Distrust
Trustworthiness
Selection (genetic algorithm)
Psychology
Personnel selection
Human resource management
Audit
Heuristic
Social psychology
Human resources
Organizational justice
Computer science
Honesty
Applied psychology
Economic justice
Inclusion (mineral)
Diversity (politics)
Knowledge management
Artificial intelligence
Quality (philosophy)
Human resource management system
Job performance
Organizational behavior
Work group
Authors
Mads Nordmo Arnestad, Yochanan Bigman, Elizabeth Solberg, Kurt Gray
Abstract
As the performance of artificial intelligence (AI)-enabled algorithms improves, so too does the potential for them to be used to increase the efficiency and effectiveness of human resource management (HRM) decisions. Yet public distrust in AI algorithms could keep organizations from using this technology to improve HRM decision-making. Here, we examine one factor that may influence the perceived trustworthiness of AI algorithms used in HRM, specifically those used in personnel selection decisions. Drawing from organizational justice and trust theories, we posit that knowledge of how the algorithm compares with human recruiters in terms of hiring members of traditionally discriminated-against demographic groups serves as a fairness heuristic that affects the algorithm's perceived trustworthiness by increasing its perceived ability, benevolence, and integrity. In three experimental studies (N = 1382), we show that when people are informed that an algorithm used in personnel selection results in more women or racial minorities being hired, as compared to selection decisions made by human recruiters, they perceive it as having higher ability, benevolence, and integrity, and are more willing to adopt it and to follow its recommendations. The opposite is true when the algorithm is said to decrease the number of women and racial minorities being hired. Our research suggests that auditing personnel selection decisions made using AI algorithms, and communicating how they compare with human recruiters in terms of their diversity, equity, and inclusion outcomes, is important for the perceived trustworthiness and public acceptance of this technology.