Inclusion (mineral)
Replication
Audit
Inclusion–exclusion principle
Computer science
Selection (genetic algorithm)
Relation (database)
Artificial intelligence
Machine learning
Psychology
Social psychology
Management
Data mining
Political science
Economics
Mathematics
Law
Statistics
Politics
Identifier
DOI:10.1111/1748-8583.12511
Abstract
Despite frequent claims that increased use of artificial intelligence (AI) in hiring will reduce the human bias that has long plagued recruitment and selection, AI may equally replicate and amplify such bias and embed it in technology. This article explores exclusion and inclusion in AI-supported hiring, focusing on three interrelated areas: data, design and decisions. It is suggested that in terms of data, organisational fit, categorisations and intersectionality require consideration in relation to exclusion. As various stakeholders collaborate to create AI, it is essential to explore which groups are dominant and how subjective assessments are encoded in technology. Although AI-supported hiring should enhance recruitment decisions, evidence is lacking on how humans and machines interact in decision-making, and how algorithms can be audited and regulated effectively for inclusion. This article recommends areas for interrogation through further research, and contributes to understanding how algorithmic inclusion can be achieved in AI-supported hiring.