Transparency (behavior)
Control (management)
Psychology
Computer science
Cognitive science
Artificial intelligence
Computer security
Authors
Ryan W. Wohleber,Kimberly Stowers,Michael Barnes,Jessie Y. C. Chen
Identifiers
DOI:10.1016/j.chb.2023.107866
Abstract
Multiple unmanned system control is a complex command and control endeavor, but pairing human operators with an intelligent agent (IA) teammate can buttress the collection and synthesis of data and improve complex decision making. Effective human-autonomy teams (HATs) require human trust in IA teammates to be properly calibrated, which can be supported by communications pertaining to underlying functions of the IA, or "transparency". One prominent guide for the application of transparency is Chen and colleagues' Situation awareness-based Agent Transparency (SAT) model. This effort sought to extend understanding of the application of this model by manipulating secondary transparency communication parameters: face threat (i.e., threat to a person's sense of social standing) and design of transparency communication (verbal, graphical, and iconographical). Results revealed that increasing face threat can improve reliance calibration at low transparency but may be detrimental when transparency is high. Outcomes concerning the method of transparency communication suggest that while verbal communication of transparency information is sufficient and even preferred when a low level of transparency is provided, reliance on graphical and iconographical approaches for presenting transparency information increases at a higher level of transparency.