Control (management)
Computer science
Knowledge management
Artificial intelligence
Management science
Management
Engineering
Economics
Authors
Gudela Grote,Sharon K. Parker,Kevin Crowston
Identifier
DOI:10.5465/amr.2023.0117
Abstract
The growing agency of artificial intelligence (AI) systems, more specifically systems based on machine learning, has raised concerns about the security, safety, and ethical risks of AI use. We argue that core to mitigating AI risks is proper alignment of control and accountability for the stakeholders involved in AI development and use. Control enables, and accountability motivates, stakeholders to achieve desired and avoid undesired outcomes using AI. However, AI systems’ capabilities for autonomous adaptivity reduce control even for the experts who create them. Moreover, increasing interdependencies between AI development and use render it difficult to unambiguously locate control and accountability. In this paper, we address these challenges for mitigating AI risks by postulating decentralized forms of stakeholder governance and integrative negotiations among stakeholders during the AI life cycle as conducive to aligning control and accountability for AI development and use. Further, we specify that extensive information sharing aided by perspective taking and a shared norm of accountability facilitate integrative negotiation strategies. We conclude by discussing the implications of our theory for management scholarship on the impact of AI, and identify promising avenues for future research at micro, meso, and macro levels of analysis.