Human resource management
Knowledge management
Interpretability
Leverage (statistics)
Transparency (behavior)
Human resources
Perception
Computer science
Psychology
Artificial intelligence
Computer security
Management
Economics
Neuroscience
Authors
Hyanghee Park, Daehwan Ahn, Kartik Hosanagar, Joonhwan Lee
Identifier
DOI:10.1145/3411764.3445304
Abstract
Recently, Artificial Intelligence (AI) has been used to enable efficient decision-making in managerial and organizational contexts, ranging from employment to dismissal. However, to avoid employees’ antipathy toward AI, it is important to understand which aspects of AI employees like and/or dislike. In this paper, we aim to identify how employees perceive current human resource (HR) teams and future algorithmic management. Specifically, we explored what factors negatively influence employees’ perceptions of AI making work performance evaluations. Through in-depth interviews with 21 workers, we found that 1) employees feel six types of burdens (i.e., emotional, mental, bias, manipulation, privacy, and social) regarding AI's introduction to human resource management (HRM), and that 2) these burdens could be mitigated by incorporating transparency, interpretability, and human intervention into algorithmic decision-making. Based on our findings, we present design efforts to alleviate employees’ burdens. To leverage AI for HRM in fair and trustworthy ways, we call for the HCI community to design human-AI collaboration systems with various HR stakeholders.