Deception
Psychology
Compromise
Transparency (behavior)
Normativity
Social psychology
Quality (philosophy)
Internet privacy
Epistemology
Computer science
Sociology
Computer security
Social science
Philosophy
Authors
Zoë A. Purcell, Mengchen Dong, Anne‐Marie Nussberger, Nils Köbis, Maurice Jakesch
Abstract
Artificial intelligence (AI) can enhance human communication, for example, by improving the quality of our writing, voice or appearance. However, AI-mediated communication also has risks—it may increase deception, compromise authenticity or yield widespread mistrust. As a result, both policymakers and technology firms are developing approaches to prevent and reduce potentially unacceptable uses of AI communication technologies. However, we do not yet know what people believe is acceptable or what their expectations are regarding usage. Drawing on normative psychology theories, we examine people's judgements of the acceptability of open and secret AI use, as well as people's expectations of their own and others' use. In two studies with representative samples (Study 1: N = 477; Study 2: N = 765), we find that people are less accepting of secret than open AI use in communication, but only when directly compared. Our results also suggest that people believe others will use AI communication tools more than they would themselves and that people do not expect others' use to align with their expectations of what is acceptable. While much attention has been focused on transparency measures, our results suggest that self‐other differences are a central factor for understanding people's attitudes and expectations for AI‐mediated communication.