When a Robot Reaches Out for Human Help

Jan 1, 2018 · Ignasi Andrés, Leliane Nunes De Barros, Denis Deratani Mauá, Thiago Dias Simão
Abstract
In many realistic planning situations, any policy has a non-zero probability of reaching a dead-end. In such cases, a popular approach is to plan so as to maximize the probability of reaching the goal. While this strategy increases the robustness and expected autonomy of the robot, it assumes that the robot gives up on the task whenever a dead-end is encountered. In this work, we consider planning for agents that proactively and autonomously resort to human help when an unavoidable dead-end is encountered (the so-called symbiotic agents). To this end, we develop a new class of Goal-Oriented Markov Decision Processes that includes a set of human actions guaranteeing the existence of a proper policy, one that may resort to human help. We discuss two different optimization criteria: minimizing the probability of using human help, and minimizing the expected cumulative cost where a finite penalty is incurred the first time human help is used. We show that, for a large enough penalty, the two criteria are equivalent. We report on experiments with standard probabilistic planning domains on reasonably large problems.
Type
Publication
Advances in Artificial Intelligence: Proceedings of the 16th Ibero-American Conference on Artificial Intelligence
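To give a feel for the expected-cumulative-cost criterion described in the abstract, here is a minimal sketch, not the authors' implementation: value iteration on a hypothetical toy goal-oriented MDP in which every non-goal state also has a "human help" action that deterministically reaches the goal at a finite penalty. The state names, transition probabilities, unit action costs, and the penalty value `D` are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's implementation): value iteration
# on a toy goal-oriented MDP augmented with a "human help" action.
# States, probabilities, costs, and the penalty D are assumptions for illustration.

GOAL, DEAD_END = "goal", "dead_end"
STATES = ["s0", "s1", DEAD_END, GOAL]

# Autonomous agent actions: (state, action) -> [(next_state, probability)], unit cost each.
AGENT = {
    ("s0", "move"): [("s1", 0.8), (DEAD_END, 0.2)],
    ("s1", "move"): [(GOAL, 0.9), (DEAD_END, 0.1)],
}

D = 50.0  # finite penalty charged when human help is requested


def value_iteration(eps=1e-6):
    """Expected-cumulative-cost criterion: since help is always available at
    penalty D, every state has finite cost and a proper policy exists."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            if s == GOAL:
                continue
            # Option 1: ask the human, which deterministically reaches the goal.
            best = D
            # Option 2: any autonomous action available in s.
            for (st, a), outcomes in AGENT.items():
                if st == s:
                    q = 1.0 + sum(p * V[ns] for ns, p in outcomes)
                    best = min(best, q)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V


if __name__ == "__main__":
    for s, v in value_iteration().items():
        print(f"V({s}) = {v:.2f}")
```

In this toy reading, the dead-end state's only finite-cost option is the human-help action, so the optimal policy resorts to help exactly there; making the penalty `D` larger pushes the policy toward asking for help only where the goal is otherwise unreachable, which is in the spirit of the abstract's claim that, for a large enough penalty, the cost criterion matches the help-probability-minimizing one.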