Keywords
Blame
Harm
Psychology
Social psychology
Agency (philosophy)
Sense of agency
Attribution
Misattribution of memory
Feeling
Perception
Pleasure
Cognition
Epistemology
Philosophy
Neuroscience
Authors
Yulia Sullivan, Samuel Fosso Wamba
Identifier
DOI: 10.1007/s10551-022-05053-w
Abstract
The current research aims to answer the following question: "who will be held responsible for harm involving an artificial intelligence (AI) system?" Drawing upon the literature on moral judgments, we assert that when people perceive an AI system's action as causing harm to others, they will assign blame to the different entity groups involved in an AI's life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Building on the theory of mind perception, we hypothesized that two dimensions of mind mediate the relationship between perceived intentional harm and blame judgments toward AI: perceived agency (attributing intention, reasoning, goal pursuit, and communication to AI) and perceived experience (attributing emotional states, such as the capacity to feel pain and pleasure, personality, and consciousness to AI). We also predicted that people attribute higher levels of mind to AI when harm is perceived to be directed at humans than when it is perceived to be directed at non-humans. We tested our research model in three experiments. In all three, perceived intentional harm led to blame judgments toward AI. In two experiments, perceived experience, not perceived agency, mediated the relationship between perceived intentional harm and blame judgments. We also found that companies and developers were held responsible for moral violations involving AI, with developers receiving the most blame among the entities involved. Our third experiment reconciles these findings by showing that perceived intentional harm directed at a non-human entity did not lead to increased attributions of mind to AI. These findings have implications for theory and practice concerning unethical outcomes and behavior associated with AI use.