Generating Explicable Plans using Plan Distances

Explicable Robot Planning as Minimizing Distance from Expected Behavior
A. Kulkarni, T. Chakraborti, Y. Zha, Y. Zhang, S. Vadlamudi, S. Kambhampati
Preprint, arXiv:1611.05497, 2016


In this work, we assume that the human's mental model of the robot's model is available to us. We propose a methodology for generating explicable robot plans using plan distance metrics. Humans assign scores to candidate robot plans that indicate their degree of explicability. Through regression, we learn a scoring function over different plan distance measures, which we call the explicability distance. We then use this distance as a heuristic to generate robot plans, minimizing the plan distance between the robot's plans and explicable plan traces.
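A minimal sketch of this idea in Python, assuming a single illustrative plan distance measure and made-up training data; the feature choice, scores, and function names below are hypothetical and only meant to show the regression-then-heuristic pattern:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def action_distance(plan_a, plan_b):
    """Jaccard-style distance between the action sets of two plans
    (one of several possible plan distance measures)."""
    sa, sb = set(plan_a), set(plan_b)
    return 1.0 - len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

def plan_distance_features(robot_plan, expected_plans):
    """Minimum action distance from the robot plan to any expected (explicable) trace.
    A fuller version would also include causal-link and state-sequence distances."""
    return [min(action_distance(robot_plan, p) for p in expected_plans)]

# Hypothetical data: candidate robot plans with human-assigned explicability
# scores, plus the plan traces humans expected to see.
expected_traces = [("pickup", "move", "place")]
candidate_plans = [("pickup", "move", "place"),
                   ("pickup", "spin", "move", "place"),
                   ("move", "pickup", "move", "place")]
human_scores = [1.0, 0.4, 0.7]  # higher = more explicable (illustrative values)

X = np.array([plan_distance_features(p, expected_traces) for p in candidate_plans])
y = np.array(human_scores)

# Regress human scores onto the plan distance features; the learned mapping
# plays the role of the explicability distance.
explicability_model = LinearRegression().fit(X, y)

def explicability_heuristic(plan):
    """Heuristic value for guiding plan generation: predicted explicability."""
    feats = np.array([plan_distance_features(plan, expected_traces)])
    return explicability_model.predict(feats)[0]

print(explicability_heuristic(("pickup", "move", "place")))
```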


Plan Explicability and Predictability

Plan Explicability for Robot Task Planning
Y. Zhang, S. Sreedharan, A. Kulkarni, T. Chakraborti, H. Zhuo, S. Kambhampati
RSS Workshop on Planning for Human-Robot Interaction, 2016


For autonomous agents working alongside humans, one important requirement is to synthesize plans that humans can easily understand. To address this, we introduce the notions of plan explicability and predictability. To compute these measures, we first postulate that humans understand agent plans by associating abstract tasks with agent actions, which can be viewed as a labeling process. We learn this human labeling scheme for agent plans from training examples using conditional random fields (CRFs), and then use the learned model to label a new plan and compute its explicability and predictability. We provide evaluations on a synthetic domain and with human subjects using a physical robot to show the effectiveness of our approach.
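A minimal sketch of the labeling step, assuming the sklearn-crfsuite package and toy training data; the feature functions, the "none" label for actions humans could not associate with a task, and the fraction-of-labeled-actions score are illustrative assumptions, not the paper's exact formulation:

```python
import sklearn_crfsuite  # CRF wrapper over python-crfsuite

def action_features(plan, i):
    """Simple per-action features for the CRF; richer features could
    include state fluents and further context."""
    feats = {"action": plan[i]}
    if i > 0:
        feats["prev_action"] = plan[i - 1]
    return feats

def featurize(plan):
    return [action_features(plan, i) for i in range(len(plan))]

# Hypothetical training data: plans whose actions humans labeled with
# abstract tasks; "none" marks actions they could not explain.
train_plans = [("pickup", "move", "place"), ("pickup", "spin", "move", "place")]
train_labels = [["fetch", "fetch", "deliver"], ["fetch", "none", "fetch", "deliver"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit([featurize(p) for p in train_plans], train_labels)

def explicability(plan):
    """One simple way to turn the learned labeling into a score:
    the fraction of actions that receive a task label."""
    labels = crf.predict([featurize(plan)])[0]
    return sum(1 for lbl in labels if lbl != "none") / len(labels)

print(explicability(("pickup", "move", "place")))
```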


Plan Explicability for Human Robot Collaborations

For robots to be part of our daily lives, their ability to form safe and successful collaborations with humans is a necessity. To ensure smoother collaborations, we provide a framework for human-robot teams built on the concept of explicability. Our contributions include extending the explicability formulation to support interactive human-robot teaming and implementing the framework on a physical robotic platform. We make the reasonable assumption that an agent may only have an approximate version of the other agent's model; that is, the robot not only has to learn the human's preconceptions about its own model but also has to work with incomplete human planning preferences.