Accountability
Transparency (behavior)
Equity
Decision maker
Artificial intelligence
Computer science
Psychology
Knowledge management
Management science
Economics
Political science
Computer security
Law
Authors
ChangHyun Lee, Kyung Jin
Identifier
DOI: 10.1016/j.ijhcs.2022.102976
Abstract
Because artificial intelligence (AI) recruitment systems have exhibited discriminatory decisions in recent applications, their adoption in industry has raised doubts. As equity has been emphasized in AI decision-making frameworks, the non-explainability of high-performing AI methods has become a prominent issue. Scholars have therefore focused on human–AI augmentation, in which humans consider equity and AI supports that consideration; as a result, explainability is highlighted as a new capability that AI methods require for ideal decisions. In this regard, this study proposes the fairness, accountability, and transparency (FAT)–complexity, anxiety, and trust (CAT) model, which describes the path from explainability to AI system adoption under augmentation, assuming that the AI decision maker's capability to explain the basis of its decisions and to interact with the human decision maker is crucial for AI recruitment system adoption. We found that explainability and augmentation are two key factors in AI recruitment system adoption, and we assessed that their importance will gradually increase as recruiters are asked to use such AI systems more commonly. Moreover, this study conceptualizes the role of an augmented relationship between humans and AI in decision-making, in which each complements the other's limitations.