Computer science
Rationalization (economics)
Rationality
Variety (cybernetics)
Generative grammar
Artificial intelligence
Set (abstract data type)
Preference
Machine learning
Bayesian probability
Mathematics
Philosophy
Statistics
Epistemology
Political science
Law
Programming language
Authors
Scott A. Humr, M. Canan, Mustafa Demir
Identifier
DOI: 10.1177/21695067231193672
Abstract
AI is set to take over some tasks within the decision space that have traditionally been reserved for humans. In turn, interacting with AI systems requires human decision-makers to rationalize AI outputs, and they may have difficulty forming trust in such AI-generated information. Although a variety of analytical methods have provided some insight into human trust in AI, a more comprehensive understanding of trust may be augmented by generative theories that capture its temporal evolution. Therefore, an open-system modeling approach that represents trust as a function of time with a single probability distribution can potentially improve the modeling of human trust in an AI system. The results of this study could improve machine behaviors that help steer a human's preferences toward a more Bayesian-optimal rationality, which is useful in stressful decision-making scenarios.
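To make the idea of trust as a time-evolving probability distribution concrete, below is a minimal Python sketch. It assumes a Beta-Bernoulli model in which trust at time t is a Beta(alpha, beta) distribution over the probability that the AI's output is correct, updated after each interaction; the abstract does not specify the paper's actual open-system model, so the model choice, the 80% accuracy rate, and the `update_trust` helper are illustrative assumptions only.

```python
import numpy as np

# Illustrative assumption (not the paper's stated model): trust at time t
# is a Beta(alpha, beta) distribution over the probability that the AI's
# output is correct, updated via Bayes' rule after each interaction.

def update_trust(alpha: float, beta: float, ai_correct: bool) -> tuple[float, float]:
    """Conjugate Bayesian update of the trust distribution after one observation."""
    if ai_correct:
        return alpha + 1.0, beta
    return alpha, beta + 1.0

# Simulate interactions with a hypothetical AI that is correct 80% of the
# time; the mean of the trust distribution evolves with each observation.
rng = np.random.default_rng(0)
alpha, beta = 1.0, 1.0  # uniform prior: no initial trust or distrust
for t in range(20):
    outcome = bool(rng.random() < 0.8)
    alpha, beta = update_trust(alpha, beta, outcome)
    mean_trust = alpha / (alpha + beta)
    print(f"t={t:2d}  mean trust = {mean_trust:.3f}")
```

Under this toy model, the posterior mean converges toward the AI's true reliability, which is one simple way to read "steering a human's preferences toward a more Bayesian-optimal rationality."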