Empathy
Stethoscope
Advice (programming)
Label
Reliability (semiconductor)
Psychology
Medical advice
Applied psychology
Social psychology
Medicine
Computer science
Psychiatry
Radiology
Power (physics)
Physics
Criminology
Quantum mechanics
Programming language
Authors
Moritz Reis, Florian Reis, Wilfried Kunde
Identifier
DOI: 10.31234/osf.io/35hn8
Abstract
Large language models (LLMs) offer novel opportunities to seek digital medical advice. While previous research has primarily addressed the performance of such artificial intelligence (AI)-based tools, public perception of these advancements has received little attention. In two preregistered studies (N = 2,280), we presented participants with scenarios of patients obtaining medical advice. All participants received identical information, but we manipulated the putative source of this advice (“AI”, “Human physician”, “Human + AI”). Advice labelled “AI” or “Human + AI” was evaluated as significantly less reliable and less empathetic than advice labelled “Human”. Moreover, participants indicated lower willingness to follow the advice when AI was believed to be involved in generating it. Our findings point toward an anti-AI bias in the reception of digital medical advice, even when AI is supposedly supervised by physicians. Given the tremendous potential of AI for medicine, elucidating ways to counteract this bias should be an important objective of future research.