Volition (linguistics)
Perception
Trustworthiness
Social psychology
Sense of agency
Agency (philosophy)
Applications of artificial intelligence
Psychology
Artificial intelligence
Computer science
Epistemology
Linguistics
Philosophy
Neuroscience
Authors
Bart Vanneste, Phanish Puranam
Source
Journal: Social Science Research Network
[Social Science Electronic Publishing]
Date: 2021-01-01
Cited by: 7
Abstract
The literature on trust among humans requires that the trustee is seen to act with agency; otherwise, trust is undefined. In contrast, the literature on confidence in technology does not require that the technology we make ourselves vulnerable to is perceived to have any volition. Modern artificial intelligence (AI) technologies are distinctive in that they are often perceived as agentic to varying degrees, typically more agentic than other technologies but less so than humans. We theorize how different levels of perceived agency of an AI affect human trust in it through three mechanisms. First, a more agentic-seeming AI, as well as its designer, appears more able, and therefore more trustworthy. Second, if the AI is seen as more agentic, trustworthiness perceptions about the AI become more important relative to those about its designer. Third, because of betrayal aversion, the anticipated psychological cost of the AI violating trust increases if the AI is seen as more agentic. These mechanisms imply that making an AI appear more agentic may increase or decrease the trust that humans place in it, and they also explain why some interventions to improve human trust in AI are likely to be more robust than others.