Normativity, Transparency (behavior), Context (archaeology), Public economics, Affect (linguistics), Public policy, Survey data collection, Economics, Balance (ability), Positive economics, Political science, Sociology, Psychology, Economic growth, Law, Paleontology, Statistics, Mathematics, Communication, Neuroscience, Biology
Identifier
DOI: 10.1080/13501763.2022.2094988
Abstract
Citizens' attitudes concerning aspects of AI such as transparency, privacy, and discrimination have received considerable attention. However, it is an open question to what extent economic consequences affect preferences for public policies governing AI. When does the public demand imposing restrictions on – or even prohibiting – emerging AI technologies? Do average citizens' preferences depend causally on normative and economic concerns or only on one of these causes? If both, how might economic risks and opportunities interact with assessments based on normative factors? And to what extent does the balance between the two kinds of concerns vary by context? I answer these questions using a comparative conjoint survey experiment conducted in Germany, the United Kingdom, India, Chile, and China. The data analysis suggests strong effects regarding AI systems' economic and normative attributes. Moreover, I find considerable cross-country variation in normative preferences regarding the prohibition of AI systems vis-a-vis economic concerns.