Accountability
Knowledge management
Engineering management
Engineering ethics
Computer science
Engineering
Process management
Political science
Law
Authors
Jan-Hendrik Schmidt, Sebastian Clemens Bartsch, Martin Adam, Alexander Benlian
Identifier
DOI:10.1007/s12599-024-00914-2
Abstract
The increasing proliferation of artificial intelligence (AI) systems presents new challenges for the future of information systems (IS) development, especially in terms of holding stakeholders accountable for the development and impacts of AI systems. However, current governance tools and methods in IS development, such as AI principles or audits, are often criticized for their ineffectiveness in influencing AI developers’ attitudes and perceptions. Drawing on construal level theory and Toulmin’s model of argumentation, this paper employed a sequential mixed-method approach to integrate insights from a randomized online experiment (Study 1) and qualitative interviews (Study 2). This combined approach helped us investigate how different types of accountability arguments affect AI developers’ accountability perceptions. In the online experiment, process accountability arguments were found to be more effective than outcome accountability arguments in enhancing AI developers’ perceived accountability. However, when supported by evidence, both types of accountability arguments proved to be similarly effective. The qualitative study corroborates and complements the quantitative study’s conclusions, revealing that process and outcome accountability emerge as distinct theoretical constructs in AI systems development. The interviews also highlight critical organizational and individual boundary conditions that shape how AI developers perceive their accountability. Together, the results contribute to IS research on algorithmic accountability and IS development by revealing the distinct nature of process and outcome accountability while demonstrating the effectiveness of tailored arguments as governance tools and methods in AI systems development.