Psychology
Harm
Social psychology
Authors
Jonathan Gratch, Nathanael J. Fast
Identifier
DOI: 10.1016/j.copsyc.2022.101382
Abstract
Advances in artificial intelligence (AI) enable new ways of exercising and experiencing power by automating interpersonal tasks such as interviewing and hiring workers, managing and evaluating work, setting compensation, and negotiating deals. As these techniques become more sophisticated, they increasingly support personalization where users can "tell" their AI assistants not only what to do, but how to do it: in effect, dictating the ethical values that govern the assistant's behavior. Importantly, these new forms of power could bypass existing social and regulatory checks on unethical behavior by introducing a new agent into the equation. Organization research suggests that acting through human agents (i.e., the problem of indirect agency) can undermine ethical forecasting such that actors believe they are acting ethically, yet a) show less benevolence for the recipients of their power, b) receive less blame for ethical lapses, and c) anticipate less retribution for unethical behavior. We review a series of studies illustrating how, across a wide range of social tasks, people may behave less ethically and be more willing to deceive when acting through AI agents. We conclude by examining boundary conditions and discussing potential directions for future research.