Partially Observable Markov Decision Processes
Keywords: Computer science, Robotics, Human-computer interaction, Task (project management), Artificial intelligence, Table (database), Process (computing), Markov decision process, Markov process, Markov chain, Machine learning, Markov model, Engineering, Data mining, Systems engineering, Operating system, Statistics, Mathematics
Authors
Min Chen, Stefanos Nikolaidis, Harold Soh, David Hsu, Siddhartha S. Srinivasa
Identifier
DOI: 10.1145/3171221.3171264
Abstract
Trust is essential for human-robot collaboration and user adoption of autonomous systems, such as robot assistants. This paper introduces a computational model that integrates trust into robot decision-making. Specifically, we learn from data a partially observable Markov decision process (POMDP) with human trust as a latent variable. The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the effect of its own actions on human behaviors, and (iii) choose actions that maximize team performance over the long term. We validated the model through human-subject experiments on a table-clearing task in simulation (201 participants) and with a real robot (20 participants). The results show that the trust-POMDP improves human-robot team performance in this task. They further suggest that maximizing trust in itself may not improve team performance.
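The abstract's central mechanism, treating human trust as a latent variable that the robot infers from interaction, can be illustrated with a toy Bayesian belief update. This is a minimal sketch, not the authors' model: the discrete trust levels, the intervention probabilities, and the observation sequence below are all hypothetical assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical discretization of latent trust: 0 = low, 1 = medium, 2 = high.
TRUST_LEVELS = [0, 1, 2]

# Assumed observation model: P(human intervenes in the robot's action | trust).
# Lower trust means the human intervenes more often. Numbers are illustrative.
P_INTERVENE = {0: 0.8, 1: 0.4, 2: 0.1}

def belief_update(belief, intervened):
    """Bayes-rule update of the belief over latent trust, given whether
    the human intervened in the robot's most recent action."""
    likelihood = np.array([
        P_INTERVENE[t] if intervened else 1.0 - P_INTERVENE[t]
        for t in TRUST_LEVELS
    ])
    posterior = likelihood * belief
    return posterior / posterior.sum()

# Uniform prior over trust, then three observed interactions:
# the human lets two robot actions pass, then intervenes once.
belief = np.array([1/3, 1/3, 1/3])
for intervened in [False, False, True]:
    belief = belief_update(belief, intervened)
```

In a full trust-POMDP this posterior would feed into planning, so the robot chooses actions that account for how they shift trust and thus future human behavior; the sketch shows only the inference step (i) from the abstract.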