Collision
Algorithm
Computer science
Affect (linguistics)
Psychology
Social psychology
Social approval
Internet privacy
Computer security
Communication
Authors
Yeon Kyoung Joo, Banya Kim
Identifier
DOI:10.1080/10447318.2022.2102716
Abstract
An online experiment was conducted to investigate how perceived collision algorithm types (selfish vs. utilitarian) and social approval of the algorithms (weak vs. strong) jointly affect individuals’ attitudes toward automated vehicles (AVs). The results revealed a discrepancy between what individuals consider socially desirable and what they trust or would use. Although participants regarded AVs with utilitarian collision algorithms as more ethical and socially beneficial, they personally preferred AVs with selfish algorithms: they trusted selfish AVs more and showed higher intention to use them and to pay a premium for them. Participants also evaluated AVs as more ethical and socially beneficial when the algorithms received strong social approval. However, for utilitarian algorithms, strong social approval did not increase trust or behavioral intention to use AVs; only strong social approval of selfish algorithms increased participants’ trust and intention to use AVs.