Topics
Supervisor, Robot, Computer science, Task (project management), Forcing (mathematics), Human–computer interaction, Process (computing), Optics (focusing), Knowledge management, Behavior-based robotics, Artificial intelligence, Engineering, Mobile robot, Physics, Systems engineering, Optics, Climatology, Geology, Political science, Law, Operating system
Authors
Zahra Zahedi, Mudit Verma, Sarath Sreedharan, Subbarao Kambhampati
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Identifier
DOI: 10.48550/arxiv.2105.01220
Abstract
Trust between team members is an essential requirement for any successful cooperation. Thus, engendering and maintaining fellow team members' trust becomes a central responsibility for any member trying not only to participate successfully in the task but also to ensure that the team achieves its goals. The problem of trust management is particularly challenging in mixed human-robot teams, where the human and the robot may have different models of the task at hand and thus different expectations regarding the current course of action, thereby forcing the robot to adopt costly explicable behavior. We propose a computational model for capturing and modulating trust in such iterated human-robot interaction settings, where the human adopts a supervisory role. In our model, the robot integrates the human's trust and their expectations about the robot into its planning process to build and maintain trust over the interaction horizon. By establishing the required level of trust, the robot can focus on maximizing the team goal, eschewing explicit explanatory or explicable behavior, without worrying about the human supervisor monitoring and intervening to stop behaviors they may not necessarily understand. We model this reasoning about trust levels as a meta-reasoning process over individual planning tasks. We additionally validate our model through a human-subject experiment.
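The abstract's core idea can be sketched as a toy loop. This is an illustrative simplification, not the paper's actual formulation: trust is reduced to a single scalar, the per-episode choice is between a hypothetical cheap "optimal" plan and a costlier "explicable" plan, and the trust dynamics (`threshold`, `rate`) are invented for the example.

```python
# Toy sketch of trust-aware meta-reasoning in iterated human-robot
# interaction. All names and parameters here are illustrative assumptions,
# not the model from the paper.
from dataclasses import dataclass

@dataclass
class Plan:
    cost: float          # execution cost to the robot
    explicable: bool     # whether the plan matches the human's expectations

def choose_plan(trust: float, optimal: Plan, explicable: Plan,
                threshold: float = 0.6) -> Plan:
    """Meta-reasoning step: with enough trust the supervisor is unlikely
    to intervene, so the robot can run the cheaper optimal plan; otherwise
    it invests in explicable behavior to build trust."""
    return optimal if trust >= threshold else explicable

def update_trust(trust: float, plan: Plan, rate: float = 0.2) -> float:
    """Simple linear trust dynamics: explicable behavior raises trust,
    inexplicable behavior lowers it; trust is clamped to [0, 1]."""
    delta = rate if plan.explicable else -rate
    return min(1.0, max(0.0, trust + delta))

# Iterated interaction over a short horizon.
trust = 0.3
optimal_plan = Plan(cost=1.0, explicable=False)
explicable_plan = Plan(cost=2.5, explicable=True)
total_cost = 0.0
for episode in range(5):
    plan = choose_plan(trust, optimal_plan, explicable_plan)
    total_cost += plan.cost
    trust = update_trust(trust, plan)
```

The sketch shows the trade-off the abstract describes: the robot pays the explicability cost only while trust is low, then harvests cheaper optimal behavior once trust is established, falling back when trust dips again.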