Psychology
Health care
Vignette
Perception
Resource allocation
Social psychology
Thematic analysis
Outcome (game theory)
Dehumanization
Knowledge management
Computer science
Qualitative research
Sociology
Political science
Computer network
Social science
Mathematics
Mathematical economics
Neuroscience
Anthropology
Law
Authors
Paul Formosa,Wendy Rogers,Yannick Griep,Sarah Bankins,Deborah Richards
Identifier
DOI: 10.1016/j.chb.2022.107296
Abstract
Forms of Artificial Intelligence (AI) are already being deployed into clinical settings and research into its future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as in healthcare, raises issues associated with patients' perceptions of (un)dignified treatment. We explore this issue through an experimental vignette study comparing individuals' perceptions of being treated in a dignified and respectful way in various healthcare decision contexts. Participants were subject to a 2 (human or AI decision maker) x 2 (positive or negative decision outcome) x 2 (diagnostic or resource allocation healthcare scenario) factorial design. We found evidence of a "human bias" (i.e., a preference for human over AI decision makers) and an "outcome bias" (i.e., a preference for positive over negative outcomes). However, we found that for perceptions of respectful and dignified interpersonal treatment, it matters more who makes the decisions in diagnostic cases and it matters more what the outcomes are for resource allocation cases. We also found that humans were consistently viewed as appropriate decision makers and AI was viewed as dehumanizing, and that participants perceived they were treated better when subject to diagnostic as opposed to resource allocation decisions. Thematic coding of open-ended text responses supported these results. We also outline the theoretical and practical implications of these findings.