Hello world! You've reached the homepage of Tathagata Chakraborti. I am a Ph.D. student in Dr. Subbarao Kambhampati's research group Yochan at Arizona State University. My research is in the field of Artificial Intelligence, particularly in task planning with humans in the loop - e.g. challenges involved in planning for human-robot cohabitation and collaboration.
In this project we explore how we can do better than natural language for communicating with robots in structured environments, such as the next-generation manufacturing scene, where wearables like augmented reality devices and EEG headsets can enable alternative and more effective modes of information sharing and interaction.

The Augmented Workspace
Here the robot uses a holographic vocabulary based on the projection of explicit visual cues that can be intuitively understood and directly read by the human in the loop. The “real” shared human-robot workspace is thus augmented with a virtual space, where the physical environment is used as a medium to convey information about the intended actions of the robot, the safety of the workspace, or task-related instructions.
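As a concrete illustration, one can think of each element of such a holographic vocabulary as a typed, spatially anchored message. The sketch below is purely illustrative; the HoloCue class, its fields, and the project_intent helper are hypothetical names, not the project's actual interface.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class HoloCue:
    """One element of a (hypothetical) holographic vocabulary."""
    kind: str                           # e.g. "intent", "safety", "instruction"
    anchor: Tuple[float, float, float]  # where in the workspace to render it
    payload: str                        # human-readable content of the cue
    ttl_s: float                        # how long the hologram stays visible

def project_intent(target_object: str, pose: Tuple[float, float, float]) -> HoloCue:
    # Announce the robot's next pickup target by highlighting it in place,
    # so the human can read the robot's intentions directly off the workspace.
    return HoloCue(kind="intent", anchor=pose,
                   payload=f"Robot will pick up {target_object}", ttl_s=5.0)

cue = project_intent("wrench", (0.42, -0.10, 0.95))
print(cue)
```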
The Consciousness Cloud
This provides robots with real-time access to the mental states of humans in the workspace, which they can use to query particulars of the affective state (e.g. stress levels) or to receive specific alerts about a human's response to events in the environment (e.g. oddball incidents like safety hazards and the corresponding ERP spikes).
As opposed to peer-to-peer information sharing, our cloud-based approach - where all agents log their own real/virtual/mental states to a central server and access each other's states from it - has the distinct advantage of making the system scalable to multiple agents, both humans and robots, sharing and collaborating in the same workspace.
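A minimal sketch of what such a shared state store might look like, combining the query and alert-subscription patterns described above. All names here (StateCloud, log_state, query, subscribe) are hypothetical, chosen only to make the architecture concrete.

```python
from collections import defaultdict
from typing import Any, Callable, Dict

class StateCloud:
    """Toy central server: agents log states; others query or subscribe."""
    def __init__(self) -> None:
        self._states: Dict[str, Dict[str, Any]] = defaultdict(dict)
        self._subscribers: Dict[str, list] = defaultdict(list)

    def log_state(self, agent: str, key: str, value: Any) -> None:
        # An agent publishes part of its real/virtual/mental state.
        self._states[agent][key] = value
        for callback in self._subscribers[key]:
            callback(agent, value)  # push alerts to interested agents

    def query(self, agent: str, key: str) -> Any:
        # A robot polls a particular of another agent's state, e.g. stress.
        return self._states[agent].get(key)

    def subscribe(self, key: str, callback: Callable[[str, Any], None]) -> None:
        # Register for alerts, e.g. ERP spikes flagged by an EEG headset.
        self._subscribers[key].append(callback)

cloud = StateCloud()
cloud.subscribe("erp_spike", lambda agent, v: print(f"ALERT from {agent}: {v}"))
cloud.log_state("human_1", "stress_level", 0.7)
cloud.log_state("human_1", "erp_spike", "oddball event near press #3")
print(cloud.query("human_1", "stress_level"))  # -> 0.7
```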
The design of autonomy cannot be divorced from consideration of interactions with humans - in the case of planning with humans in the loop, this spawns a plethora of research challenges. The core challenge is that the planner must reason not only with its own model but also with the human's perception of that model - we refer to this as the Multi-Model Planning setting.
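As a rough formalization (the notation below is a sketch in the spirit of the model reconciliation literature, not a verbatim definition), the planner now reasons with a pair of models: its own model and the human's mental model of the robot.

```latex
% Multi-Model Planning: the robot plans with the pair
\langle \mathcal{M}^R, \mathcal{M}^R_H \rangle
% A plan \pi^* optimal in M^R may look inexplicable under M^R_H.
% An explanation is a model update E that reconciles the two models,
\widehat{\mathcal{M}}^R_H = \mathcal{M}^R_H + \mathcal{E},
\quad \text{such that} \quad
C(\pi^*, \widehat{\mathcal{M}}^R_H) = C^*_{\widehat{\mathcal{M}}^R_H}
% where C(pi, M) is the cost of plan pi under model M,
% and C*_M is the optimal plan cost in model M.
```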
Plan Explicability & Explanations
Here we investigate how a planner, in light of such model differences, can produce plans that are explicable to a human observer, as well as techniques to produce explanations of a plan when required. We call this process model reconciliation: with repeated interactions, the planner brings the human's mental model of itself closer to the ground truth.
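To make the explanation-generation step concrete, here is a minimal, self-contained sketch of a model-space search for a smallest explanation. Models are reduced to toy action-cost tables, and find_explanation is a hypothetical name; the real systems search over much richer (e.g. PDDL) model differences.

```python
from itertools import combinations

# Toy stand-in for a planning model: a map from action name to cost.
robot_model = {"lift": 1, "push": 5, "detour": 2}
human_model = {"lift": 1, "push": 1, "detour": 2}   # human thinks "push" is cheap

def plan_cost(plan, model):
    return sum(model[a] for a in plan)

def best_cost(model, plans):
    return min(plan_cost(p, model) for p in plans)

# Candidate plans for a toy task; the robot's plan avoids the costly "push".
candidate_plans = [("push",), ("lift", "detour")]
robot_plan = min(candidate_plans, key=lambda p: plan_cost(p, robot_model))

def find_explanation(robot_model, human_model, robot_plan, plans):
    """Smallest set of model updates making robot_plan look optimal to the human."""
    diffs = [k for k in robot_model if robot_model[k] != human_model.get(k)]
    for size in range(len(diffs) + 1):         # search by increasing explanation size
        for subset in combinations(diffs, size):
            updated = dict(human_model)
            for k in subset:                    # reveal the true costs of these actions
                updated[k] = robot_model[k]
            if plan_cost(robot_plan, updated) <= best_cost(updated, plans):
                return {k: robot_model[k] for k in subset}
    return None

print(robot_plan)                               # -> ('lift', 'detour')
print(find_explanation(robot_model, human_model, robot_plan, candidate_plans))
# -> {'push': 5}: revealing the true cost of "push" reconciles the two models.
```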
Proactive Decision Support & Crowdsourced Planning
Here we show how automated planning technology can be used to aid human planners engaged in computational tasks that involve planning - for example, a team of experts formulating response strategies to disasters, or crowd workers making travel plans in response to requests on Amazon Mechanical Turk.
As robots become ubiquitous in our daily lives and in traditional human workflows, they will cohabit workspaces with humans without forming explicit teams, i.e. without shared goals or communication. In this line of work, I identify challenges for human-aware task planning that arise in such settings, as opposed to traditional notions of human-robot teaming, and investigate algorithms for stigmergic forms of collaboration - e.g. planning with resource conflicts, planning for serendipity, and proactive support. Key challenges here involve modeling the human's belief state, predicting how the environment will evolve, representing such data robustly and efficiently, and developing planners that can model and reason with these complex interaction constraints.
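As one concrete instance of the above, a hedged sketch of planning with resource conflicts: the robot maintains a belief over the human's possible plans and picks its own plan to minimize nominal cost plus expected conflicts over shared resources. Everything below (the plan encodings, CONFLICT_PENALTY, expected_conflict_cost) is illustrative, not the actual algorithm.

```python
# Each plan is encoded by the shared resources it will hold, e.g. tools or regions.
human_plans = [
    ({"drill", "bench"}, 0.7),   # (resources used, probability the human follows it)
    ({"ladder"}, 0.3),
]

robot_plans = {
    "use_drill_now":  ({"drill"}, 2.0),    # (resources used, nominal plan cost)
    "use_spare_tool": ({"spanner"}, 3.0),
}

CONFLICT_PENALTY = 10.0  # cost of clashing with the human over a resource

def expected_conflict_cost(robot_resources):
    # Expected penalty under the robot's belief about what the human will do.
    return sum(p * CONFLICT_PENALTY
               for resources, p in human_plans
               if robot_resources & resources)

def pick_plan():
    return min(robot_plans,
               key=lambda name: robot_plans[name][1]
               + expected_conflict_cost(robot_plans[name][0]))

print(pick_plan())  # -> "use_spare_tool": a dearer plan that avoids likely conflicts
```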