Keywords
Advice (programming), Weighting, Computer science, Private information retrieval, Algorithms, Operations research, Business, Mathematics, Computer security, Physics, Acoustics, Programming languages
Authors
Maya Balakrishnan,Kris Ferreira,Jordan Tong
Source
Journal: Management Science
Publisher: Institute for Operations Research and the Management Sciences
Date: 2025-03-24
Identifier
DOI:10.1287/mnsc.2022.03850
Abstract
Even if algorithms make better predictions than humans on average, humans may sometimes have private information that an algorithm does not have access to that can improve performance. How can we help humans effectively use and adjust recommendations made by algorithms in such situations? When deciding whether and how to override an algorithm’s recommendations, we hypothesize that people are biased toward following naïve advice-weighting (NAW) behavior; they take a weighted average between their own prediction and the algorithm’s prediction, with a constant weight across prediction instances regardless of whether they have valuable private information. This leads to humans overadhering to the algorithm’s predictions when their private information is valuable and underadhering when it is not. In an online experiment where participants were tasked with making demand predictions for 20 products while having access to an algorithm’s predictions, we confirm this bias toward NAW and find that it leads to a 20%–61% increase in prediction error. In a second experiment, we find that feature transparency—even when the underlying algorithm is a black box—helps users more effectively discriminate how to deviate from algorithms, resulting in a 25% reduction in prediction error. We make further improvements in a third experiment via an intervention designed to move users away from advice weighting and instead use only their private information to inform deviations, leading to a 34% reduction in prediction error. This paper was accepted by Elena Katok for the Special Issue on the Human-Algorithm Connection. Supplemental Material: The online appendix and data files are available at https://doi.org/10.1287/mnsc.2022.03850.
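For intuition, here is a minimal simulation sketch of the NAW behavior the abstract describes: a constant-weight average of the human's and the algorithm's predictions, compared against deviating from the algorithm only when private information is valuable. All distributions, the weight w, and the printed error values are illustrative assumptions, not parameters or results from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: true demand for n items, an algorithmic forecast,
# and a human forecast that is better than the algorithm only on items
# where the human holds private information.
n = 10_000
demand = rng.normal(100, 20, n)                  # true demand
algo = demand + rng.normal(0, 10, n)             # algorithm's prediction
has_info = rng.random(n) < 0.5                   # private info on half the items
human = np.where(
    has_info,
    demand + rng.normal(0, 5, n),                # informed: beats the algorithm
    demand + rng.normal(0, 25, n),               # uninformed: worse than the algorithm
)

# Naive advice weighting: the SAME weight w on every instance, regardless
# of whether the human actually has valuable private information.
w = 0.5
naw = w * human + (1 - w) * algo

# Benchmark behavior: rely on the private signal only when it exists,
# otherwise adhere fully to the algorithm.
selective = np.where(has_info, human, algo)

mae = lambda pred: np.abs(pred - demand).mean()
print(f"NAW error:       {mae(naw):.2f}")        # constant weighting
print(f"Selective error: {mae(selective):.2f}")  # deviate only when informed
```

Running this shows the pattern the paper hypothesizes: constant weighting overadheres to the algorithm on informed items and underadheres on uninformed ones, so its average error exceeds that of the selective strategy.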