Keywords
Autonomy, Task (project management), Psychology, Trustworthiness, Process (computing), Social psychology, Perception, Social relations, Vulnerability (computing), Computer science, Computer security, Political science, Operating system, Economics, Neuroscience, Management, Law
Authors
August Capiola,Joseph B. Lyons,Kara Harris,Izz aldin Hamdan,Siva Kailas,Katia Sycara
Identifier
DOI:10.1016/j.chb.2023.107966
Abstract
Autonomy is becoming increasingly integrated into everyday life. For humans to fully realize the benefits of working alongside autonomy, appropriate trust toward autonomous partners will be necessary. However, research is needed to determine how humans respond to autonomous partners when those partners behave (un)expectedly. Thus, a series of studies was designed to investigate the effect of the framed social intent of an autonomous teammate (Study 1), its unstated behavioral manifestations (Study 2), and the interaction of these variables (Study 3) on participants' trustworthiness perceptions, reliance intentions, and trust behaviors. Participants were asked to imagine themselves partnering with an autonomous teammate in a team-based, gamified collaboration. Key innovations in this task involved role-based vulnerability (necessitating clear expectations regarding one's stated social intent) and teammate interdependence. Across studies, the framed social intent and observable behaviors of an autonomous agent were manipulated. Results demonstrated robust effects of these manipulations, with interactions demonstrating the nuanced influence of (un)met expectations on the criteria. We conclude by offering future research and design suggestions aimed at enhancing human-autonomy interactions.