The aim of our project is to leverage the power of the cloud to build a shared workspace between humans and robots, where these agents can communicate effectively across multiple dimensions, including EEG signals and augmented reality, and thus collaborate smoothly and safely.
Check out our demos here and here!
The manufacturing industry has invested heavily over the last few years in automating the factory floor, with a rapidly growing number of robots deployed every year to work alongside humans on the assembly line. However, the recent fatal incident at a Volkswagen factory has tainted this rosy picture and somewhat reined in the promise of robots and humans collaborating on the factory floor. So we ask: what exactly went wrong?
Despite the massive progress made in natural language processing, natural language understanding is still a largely unsolved problem, and as such robots find it difficult to understand human intentions and to express their own. This impedance mismatch poses a significant barrier to realizing the promise of effective human-robot interaction and collaboration. We, at ÆRobotics, aim to address this challenge by (1) bridging the gap from the human's side, using brain signals such as EEG to make a representation of the human's mental model directly accessible to the robot, and (2) bridging the gap from the robot's side, using augmented reality to provide a mutually understood platform, or vocabulary, of communication.
Tathagata Chakraborti (team lead) is a fourth-year Ph.D. student at Arizona State University. His research interests include planning with humans in the loop, with applications in task planning for human-robot teaming and cohabitation, and proactive decision support. His research has been featured in premier research conferences and workshops in artificial intelligence and robotics worldwide, including AAAI, AAMAS, ICAPS, IROS, and ICRA. He has also received the IBM Ph.D. Fellowship and multiple University Graduate Fellowship Awards in recognition of his work.
Sarath Sreedharan is a first-year Ph.D. student at Arizona State University. His research interests lie at the intersection of automated planning and human-aware AI. When not working on the Imagine Cup project, he is usually busy teaching robots to better explain their decisions. His research has been featured in premier research conferences, including AAMAS, ICAPS, and ICRA, and in journals such as AIJ. Before starting his graduate studies, he worked as a senior software engineer at Zynga, where he helped scale up social games to support millions of players.
Anagha Kulkarni is a second-year Ph.D. student in Computer Science at Arizona State University. Her research interests lie in automated planning in AI, specifically in addressing the challenges inherent in planning for human-robot team settings. She is interested in making robots behave in a comprehensible fashion that fosters human-robot teaming. Her work has been featured in premier robotics conferences such as ICRA and RSS. Before joining ASU in 2015, she received her Master's degree in Computer Science from the University of Southern California.
Subbarao Kambhampati is a professor of Computer Science & Engineering at Arizona State University, where he directs the Yochan research group. He is the Ph.D. advisor of all three members of the team. His research interests are primarily in artificial intelligence, including planning and decision making, human-robot teaming, and human-aware AI. He has published extensively: Google Scholar counts over 7600 citations to his work, with an h-index of 45. He is currently the President of the Association for the Advancement of Artificial Intelligence (AAAI), as well as an elected Fellow of AAAI. He is acknowledged as an authority on automated planning, and several of his contributions have found their way into popular AI textbooks. He has received multiple awards for his teaching, including best teacher awards at the college and department levels and a last lecture selection at the university level.
Yu Zhang is an Assistant Professor at Arizona State University. His research interests focus on innovating and applying AI and Machine Learning methods to human-robot teaming, multi-agent systems, and automated planning and reasoning. He brings a unique perspective to the team, with his expertise at the intersection of AI and Robotics. In fact, the team's project grew out of the Human Aware Robotics class he taught in Fall 2016.
Heni Ben Amor is an Assistant Professor at Arizona State University, where he leads the ASU Interactive Robotics Laboratory. His research focuses on artificial intelligence, machine learning, human-robot interaction, robot vision, and automatic motor skill acquisition. The team started their robotics journey in his Robot Learning class in Fall 2015, and he remains the robotics guru of the team.
Chris Blais is an Assistant Research Professor with expertise in cognitive neuroscience, utilizing methods including fMRI, EEG, computational modeling, and behavioral data. His research focuses on the interactions between implicit memory and cognitive control. He is the "brains" behind all things EEG for the team, and is also helping provide access to infrastructure for the team's EEG experiments at the ASU EEG Lab.