Journal: Management Science [Institute for Operations Research and the Management Sciences] Date: 2025-02-17
Identifier
DOI:10.1287/mnsc.2022.03510
Abstract
Failing to follow expert advice can have real and dangerous consequences. While any number of factors may lead a decision maker to refuse expert advice, the proliferation of algorithmic experts has further complicated the issue. One potential mechanism that restricts the acceptance of expert advice is betrayal aversion, or the strong dislike of violations of trust norms. This study explores whether the introduction of expert algorithms in place of human experts can attenuate betrayal aversion and lead to higher overall rates of seeking expert advice. In other words, we ask: are decision makers averse to algorithmic betrayal? The answer to this question is uncertain ex ante. We address it through an experimental financial market in which there is an identical risk of betrayal from either a human or an algorithmic financial advisor. We find that the willingness to delegate to human experts is significantly reduced by betrayal aversion, whereas no betrayal aversion is exhibited toward algorithmic experts. The impact of betrayal aversion toward financial advisors is considerable: the resulting unwillingness to take the human expert's advice leads to a 20% decrease in subsequent earnings, while no loss in earnings is observed in the algorithmic expert condition. This study has significant implications for firms, policymakers, and consumers, particularly in the financial services industry.
History: This paper was accepted by D. J. Wu for the Special Issue on the Human-Algorithm Connection.
Funding: This work was supported by the National Science Foundation [Grant 1541105].
Supplemental Material: The data files are available at https://doi.org/10.1287/mnsc.2022.03510.