Computer science
Component (thermodynamics)
Trustworthiness
Applications of artificial intelligence
Representation (politics)
Position (finance)
Public trust
Express trust
Artificial intelligence
Knowledge management
Psychology
Business
Public relations
Political science
Computer security
Law
Physics
Thermodynamics
Politics
Finance
Authors
Nessrine Omrani, Giorgia Rivieccio, Ugo Fiore, Francesco Schiavone, Sergio García-Agreda
Identifier
DOI:10.1016/j.techfore.2022.121763
Abstract
Artificial intelligence (AI) characterizes a new generation of technologies capable of interacting with the environment and aiming to simulate human intelligence. The success of integrating AI into organizations critically depends on workers' trust in AI technology. Trust is a central component of the interaction between people and AI, as incorrect levels of trust may cause misuse, abuse, or disuse of the technology. The European Commission's High-Level Expert Group on AI (HLEG) has adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI. This article investigates the links between trust in AI, concerns related to AI use, and the ethics of such use. We used data collected in 2019 from more than 30,000 individuals across the EU28. The data focus on living conditions, trust, and AI uses and concerns. An econometric model is used: the endogenous variable is an ordered measure of trust in AI, and we use an ordered logit model to highlight the factors associated with an increased level of trust in AI in Europe. The results show that many concerns related to AI use are linked to AI trust, and that the ability to try out AI applications also has an impact on initial trust. To enhance trust, practitioners can try to maximize the technological features in AI systems. Representing the AI as a humanoid or a loyal pet (e.g., a dog) will facilitate initial trust formation. Moreover, the findings reveal an unequal degree of trust in AI across countries.
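The abstract's ordered logit model treats trust in AI as an ordered outcome whose cumulative probabilities follow a logistic curve: P(y ≤ j | x) = logistic(τ_j − x′β), with category probabilities obtained as successive differences. A minimal stdlib-only sketch of this computation (the cutpoints, linear predictor value, and four trust levels below are hypothetical illustrations, not values from the paper):

```python
import math

def logistic(z):
    """Standard logistic CDF, F(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

def ordered_logit_probs(xb, cutpoints):
    """Category probabilities under an ordered logit model.

    P(y <= j | x) = logistic(tau_j - x'beta); the probability of each
    ordered category is the difference of consecutive cumulative
    probabilities, with the top category closing the distribution at 1.
    """
    cumulative = [logistic(tau - xb) for tau in cutpoints] + [1.0]
    probs, prev = [], 0.0
    for c in cumulative:
        probs.append(c - prev)
        prev = c
    return probs

# Hypothetical example: four ordered trust levels (e.g. "none" .. "high"),
# cutpoints tau = (-1, 0, 1), linear predictor x'beta = 0.5.
probs = ordered_logit_probs(xb=0.5, cutpoints=[-1.0, 0.0, 1.0])
print([round(p, 3) for p in probs])  # probabilities sum to 1
```

A positive coefficient on a covariate (e.g. prior ability to try out AI applications) raises x′β and shifts probability mass toward the higher trust categories, which is how the paper's factors "associated with an increased level of trust" are read off the fitted model.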