Compromise
Robot
Computer Science
Risk Analysis (Engineering)
Human-Computer Interaction
Business
Psychology
Artificial Intelligence
Political Science
Law
Authors
Gerald Matthews,Ryon Cumings,James R. Casey,April Rose Panganiban,Antonio Chella,Arianna Pipitone,Jinchao Lin,Mustapha Mouloua
Identifier
DOI:10.1177/10711813241276449
Abstract
Advancements in Artificial Intelligence (AI) will produce "reasonable disagreements" between human operators and machine partners. A simulation study investigated factors that may influence compromise between human and robot partners when they disagree in situation evaluation. Eighty-seven participants viewed urban scenes and interacted with a robot partner to make a threat assessment. We explored the impacts of multiple factors on threat ratings and trust, including how the robot communicated with the person and whether or not the robot compromised following dialogue. Results showed that participants were open to compromise with the robot, especially when the robot detected threat in a seemingly safe scene. Unexpectedly, dialogue with the robot and hearing the robot's inner speech reduced compromise and trust relative to control conditions providing transparency or signaling benevolence. Dialogue may change the human's perception of the robot's role in the team, indicating a challenge for the design of future systems.