Leverage (statistics)
Testbed
Robot
Computer science
Human-computer interaction
Transparency (behavior)
Task (project management)
Artificial intelligence
Field (mathematical analysis)
Process (computing)
Computer security
World Wide Web
Engineering
Mathematical analysis
Systems engineering
Operating system
Mathematics
Authors
Ning Wang, David V. Pynadath, Steven C. Hill
Identifier
DOI: 10.1109/hri.2016.7451741
Abstract
Trust is a critical factor for achieving the full potential of human-robot teams. Researchers have theorized that people will more accurately trust an autonomous system, such as a robot, if they have a more accurate understanding of its decision-making process. Studies have shown that hand-crafted explanations can help maintain trust when the system is less than 100% reliable. In this work, we leverage existing agent algorithms to provide a domain-independent mechanism for robots to automatically generate such explanations. To measure the explanation mechanism's impact on trust, we collected self-reported survey data and behavioral data in an agent-based online testbed that simulates a human-robot team task. The results demonstrate that the added explanation capability led to improvements in transparency, trust, and team performance. Furthermore, by observing how outcomes varied with the robot's explanation content, we gain valuable insight that can guide the refinement of explanation algorithms to further improve human-robot trust calibration.
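The mechanism described in the abstract derives explanations from the robot's own decision-making data rather than from hand-crafted text. As a rough illustration of that idea only, the minimal Python sketch below turns a hypothetical agent's beliefs and expected utilities into a templated explanation; the `Decision` structure, `explain_decision` function, and the example decision factors are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's code): generating a templated
# explanation directly from an agent's decision data, so the content stays
# domain-independent and reflects the actual decision-making process.

from dataclasses import dataclass
from typing import Dict


@dataclass
class Decision:
    """Snapshot of one decision made by the robot (hypothetical structure)."""
    chosen_action: str
    expected_utilities: Dict[str, float]  # action -> expected utility
    beliefs: Dict[str, float]             # proposition -> probability


def explain_decision(d: Decision) -> str:
    """Build a natural-language explanation from the decision data.

    The explanation exposes (1) the chosen action, (2) the belief that most
    strongly supports it, and (3) how its expected value compares with the
    best alternative, so the human teammate can calibrate their trust.
    """
    # The belief the agent is most confident about serves as the evidence.
    key_belief, confidence = max(d.beliefs.items(), key=lambda kv: kv[1])

    chosen_eu = d.expected_utilities[d.chosen_action]
    alternatives = {a: eu for a, eu in d.expected_utilities.items()
                    if a != d.chosen_action}

    explanation = (
        f"I chose to {d.chosen_action} because I believe '{key_belief}' "
        f"with {confidence:.0%} confidence."
    )
    if alternatives:
        alt_action, alt_eu = max(alternatives.items(), key=lambda kv: kv[1])
        explanation += (
            f" Its expected value ({chosen_eu:.1f}) exceeded the best "
            f"alternative, '{alt_action}' ({alt_eu:.1f})."
        )
    return explanation


if __name__ == "__main__":
    decision = Decision(
        chosen_action="recommend wearing protective gear",
        expected_utilities={
            "recommend wearing protective gear": 8.5,
            "recommend proceeding without gear": 3.2,
        },
        beliefs={"chemical hazard present": 0.8, "area clear": 0.2},
    )
    print(explain_decision(decision))
```

Because the explanation is assembled from whatever beliefs and utilities the agent already maintains, the same templating step could in principle be reused across tasks, which is the domain-independence the abstract points to.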