Trustworthiness
Government (linguistics)
Public relations
Public trust
Compliance (psychology)
Public policy
Psychology
Political science
Internet privacy
Social psychology
Computer science
Law
Linguistics
Philosophy
Authors
Bjorn Kleizen,Wouter Van Dooren,Koen Verhoest,Evrim Tan
Identifier
DOI:10.1016/j.giq.2023.101834
Abstract
This study examines the impact of ethical AI information on citizens' trust in, and policy support for, governmental AI projects. Unlike previous work on direct users of AI, this study focuses on the general public. Two online survey experiments presented participants with information on six types of ethical AI measures: legal compliance, ethics-by-design measures, data-gathering limitations, human-in-the-loop, non-discrimination, and technical robustness. Results reveal that general ethical AI information has little to no effect on trust, perceived trustworthiness, or policy support among citizens. Prior attitudes and experiences, including privacy concerns, trust in government, and trust in AI, are instead strong predictors. These findings suggest that short-term communication efforts on ethical AI practices have minimal impact, and that a more long-term, comprehensive approach, one that addresses citizens' underlying concerns and experiences, is necessary for building trust in governmental AI projects. As governments' use of AI becomes more ubiquitous, understanding citizen responses is crucial for fostering trust, perceived trustworthiness, and policy support for AI-based policies and initiatives.