Judgement
Psychology
Perception
Social psychology
Selection (genetic algorithm)
Economic justice
Outcome (game theory)
Morality
Algorithm
Computer science
Artificial intelligence
Epistemology
Law
Mathematics
Mathematical economics
Neuroscience
Political science
Philosophy
Authors
Tina Feldkamp, Markus Langer, Leo Wies, Cornelius J. König
Identifier
DOI: 10.1080/1359432x.2023.2169140
Abstract
Although algorithm-based systems are increasingly used as decision support for managers, there is still a lack of research on the effects of algorithm use, and more specifically of potential algorithmic bias, on decision-makers. To investigate how potential social bias in a recommendation outcome influences trust, fairness perceptions, and moral judgement, we used a moral dilemma scenario. Participants (N = 215) imagined being human resource managers responsible for personnel selection and receiving decision support from either human colleagues or an algorithm-based system. They received an applicant preselection that was either gender-balanced or predominantly male. Although participants perceived algorithm-based support as less biased, they also perceived it as generally less fair and had less trust in it. This could be related to the finding that participants perceived algorithm-based systems as more consistent but also as less likely to uphold moral standards. Moreover, participants tended to reject algorithm-based preselection more often than human-based preselection and were more likely to use utilitarian judgements when accepting it, which may indicate different underlying moral judgement processes.

Keywords: artificial intelligence; personnel selection; trust; justice; moral judgement

Disclosure statement: No potential conflict of interest was reported by the author(s).

Funding: No funds, grants, or other support was received.