Transparency (behavior)
Computer science
Predictive power
Artificial intelligence
Algorithm
Machine learning
Computer security
Epistemology
Philosophy
Authors
Qiaochu Wang, Yan Huang, Stefanus Jasin, Param Vir Singh
Source
Journal: Management Science
[Institute for Operations Research and the Management Sciences]
Date: 2022-07-05
Volume/Issue: 69 (4): 2297-2317
Citations: 12
Identifier
DOI:10.1287/mnsc.2022.4475
Abstract
Should firms that apply machine learning algorithms in their decision making make their algorithms transparent to the users they affect? Despite the growing calls for algorithmic transparency, most firms keep their algorithms opaque, citing potential gaming by users that may negatively affect the algorithm’s predictive power. In this paper, we develop an analytical model to compare firm and user surplus with and without algorithmic transparency in the presence of strategic users and present novel insights. We identify a broad set of conditions under which making the algorithm transparent actually benefits the firm. We show that, in some cases, even the predictive power of the algorithm can increase if the firm makes the algorithm transparent. By contrast, users may not always be better off under algorithmic transparency. These results hold even when the predictive power of the opaque algorithm comes largely from correlational features and the cost for users to improve them is minimal. We show that these insights are robust under several extensions of the main model. Overall, our results show that firms should not always view manipulation by users as bad. Rather, they should use algorithmic transparency as a lever to motivate users to invest in more desirable features. This paper was accepted by D. J. Wu, information systems. Supplemental Material: The online appendix is available at https://doi.org/10.1287/mnsc.2022.4475.