Keywords: Harm; Denial; Psychology; Task (project management); Team leader; Computer science; Social psychology; Engineering ethics; Management; Engineering; Psychotherapist; Economy
Authors
Beau G. Schelble, Jeremy Lopez, Claire Textor, Rui Zhang, Nathan J. McNeese, Richard Pak, Guo Freeman
Source
Journal: Human Factors [SAGE]
Date: 2022-08-06
Volume/Issue: 66 (4): 1037-1055
Citations: 19
Identifier
DOI: 10.1177/00187208221116952
Abstract
Objective: To determine the efficacy of two trust repair strategies (apology and denial) for trust violations of an ethical nature by an autonomous teammate.
Background: Although ethics in human-AI interaction is extensively studied, little research has investigated how decisions with ethical implications affect trust and performance within human-AI teams, or how trust can subsequently be repaired.
Method: Forty teams of two participants and one autonomous teammate completed three team missions within a synthetic task environment. The autonomous teammate made an ethical or unethical action during each mission, followed by an apology or denial. Measures of individual trust in the team, trust in the autonomous teammate, trust in the human teammate, perceived autonomous teammate ethicality, and team performance were taken.
Results: Teams with unethical autonomous teammates reported significantly lower trust in the team and in the autonomous teammate, and perceived those teammates as substantially more unethical. Neither trust repair strategy effectively restored trust after an ethical violation. Autonomous teammate ethicality was not related to team score, although teams with unethical autonomous teammates did have shorter times.
Conclusion: Ethical violations significantly harm trust in the overall team and in the autonomous teammate but do not negatively affect team score. However, current trust repair strategies such as apologies and denials appear ineffective at restoring trust after this type of violation.
Application: This research highlights the need to develop trust repair strategies specific to human-AI teams and to trust violations of an ethical nature.