Explainability and artificial intelligence in medicine

Artificial intelligence · Computer science
Author
Sandeep Reddy
Source
Journal: The Lancet Digital Health [Elsevier]
Volume/issue: 4 (4): e214-e215 · Cited by: 92
Identifier
DOI: 10.1016/s2589-7500(22)00029-2
Abstract

In recent years, improved artificial intelligence (AI) algorithms and access to training data have led to the possibility of AI augmenting or replacing some of the current functions of physicians.[1] However, interest from various stakeholders in the use of AI in medicine has not translated into widespread adoption.[2] As many experts have stated, one of the key reasons for this restricted uptake is the lack of transparency associated with certain AI algorithms, especially black-box algorithms.[3] Clinical medicine, as primarily evidence-based practice, relies on transparency in decision making.[3,4,5] If there is no medically explainable AI and the physician cannot reasonably explain the decision-making process, the patient's trust in them will erode.
To address the transparency issue with certain AI models, explainable AI has emerged.[3] In The Lancet Digital Health, Marzyeh Ghassemi and colleagues[6] have argued that currently available explainable AI applications are imperfect and provide only a partial explanation of the inner workings of AI algorithms. They have called for stakeholders to move away from insisting on explainability and to seek other measures, such as validation, to enable trust and confidence in black-box models. There is some validity in their criticism of certain explainable frameworks, such as post-hoc explainers, which mostly approximate the underlying machine learning mechanisms rather than reproduce them exactly. However, to argue from the limitations of certain explainable AI methods that explainable AI should be restricted in favour of other validation approaches, such as randomised controlled trials, is specious. Models or systems whose decisions cannot be well interpreted can be hard to accept,[7] especially in fields like medicine.[4] Reliance on the logic of black-box models violates medical ethics, and black-box medical practice hinders clinicians from assessing the quality of model inputs and parameters.
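One simple post-hoc explainer of the kind discussed above is permutation importance: it treats the trained model as an opaque function and measures how much held-out accuracy drops when each input feature is shuffled. The sketch below is a minimal, hypothetical illustration (the model, feature names, and data are all invented for this example, not taken from any study cited here); it shows how even an approximate probe can reveal that a model relies on a confounder rather than a clinical signal.

```python
import random

# Hypothetical black-box classifier for illustration only: it has learnt to
# key on a confounder ("scanner_id") instead of the clinical feature
# ("opacity"). All names and data here are invented for this sketch.
def black_box(opacity, scanner_id):
    return 1 if scanner_id == 1 else 0

# Synthetic evaluation rows: (opacity, scanner_id, label). The confounder
# correlates perfectly with the label in this toy dataset.
data = [(0.9, 1, 1), (0.8, 1, 1), (0.2, 0, 0), (0.1, 0, 0)] * 25

def accuracy(perm_feature=None):
    """Accuracy of the model, optionally after shuffling one feature column."""
    rng = random.Random(0)
    rows = [list(r) for r in data]
    if perm_feature is not None:
        col = [r[perm_feature] for r in rows]
        rng.shuffle(col)
        for r, v in zip(rows, col):
            r[perm_feature] = v
    return sum(black_box(r[0], r[1]) == r[2] for r in rows) / len(rows)

base = accuracy()
drop_opacity = base - accuracy(perm_feature=0)  # 0.0: the model ignores opacity
drop_scanner = base - accuracy(perm_feature=1)  # large: the model keys on the confounder
print(f"importance(opacity)={drop_opacity:.2f}, "
      f"importance(scanner_id)={drop_scanner:.2f}")
```

Such probes only approximate the model's behaviour, as the Viewpoint authors note, but they can still surface exactly the kind of confounder reliance described in the examples that follow.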
If clinicians cannot understand the decision making, they might be violating patients' rights to informed consent and autonomy.[4,5] When clinicians cannot decipher how results were arrived at, they are unlikely to be able to communicate and disclose appropriately with the patient, thus affecting the patient's autonomy and ability to engage in informed consent. Increasingly, there have been examples of high-performing black-box models that have been caught using wrong or confounding variables to achieve their results. For example, a deep learning model found patients with asthma to be at low risk of death from pneumonia because the model learnt from a training dataset that included a group of patients with asthma who had received active intervention from clinicians.[8] In another example, a deep learning model developed to screen x-rays for pneumonia used confounding information, such as the scanner's location, to detect pneumonia.[8] In a third example, a deep learning model developed to distinguish high-risk from lower-risk patients on the basis of x-rays used hardware-related metadata to predict the risk.[3] These cases suggest that reliance on the accuracy of the models alone is insufficient; additional trust-enhancing frameworks, such as explainable AI, are required.

Although criticism of explainable AI methods has been growing in recent years, there seems to be astonishingly little scrutiny of what led to the need for explainable AI in the first place: deep learning models. Such models have no explicit declarative knowledge representation, which poses a challenge in deriving an explanatory narrative. Many high-performing deep learning models have millions or even billions of parameters that are identifiable only by their location in a complex network, not as human-interpretable labels, leading to the black-box situation.[9] Also, many deep learning models that do well on training datasets do not do well on independent datasets. Further, deep learning algorithms require a large amount of training data for both interpolation and extrapolation. These issues with deep learning models have yet to be meaningfully resolved and persist in various applications, including in medicine.

Critics of explainable AI have argued for the prioritisation of validity measures over explainability frameworks.[6,8] The rationale is that, because many drugs and medical devices currently adopt validation processes (such as randomised controlled trials) to indicate efficacy, AI-enabled medical devices or software should do the same. However, we believe this argument is inappropriate. Generally, the performance of AI systems is assessed on prediction accuracy measures.[10] Even with the best efforts, AI systems are unlikely to achieve perfect accuracy because of different sources of error.[3] And even if perfect accuracy were achieved in theory, there is no guarantee that the AI system would be free of biases, especially when the systems have been trained with heterogeneous and complex data, as occurs in medicine. Ignoring or restricting explainable AI is detrimental to the adoption of AI in medicine, as few alternatives exist that can comprehensively respond to accountability, trust, and regulatory concerns while engendering confidence and transparency in the technology.

The use of explainable frameworks could help to align model performance with clinical guidelines and objectives,[3] thereby enabling better adoption of AI models in clinical practice. Transparent algorithms or explanatory approaches can also make the adoption of AI systems less risky for clinical practitioners.[2,3] There is already an increasing number of examples of how explainable frameworks in various medical specialities enhance transparency and insight.[8] These case studies can guide the integration of explainable AI with AI medical systems. Through this integration, a second level of explainability and multiple benefits can be achieved, including higher interpretability, better comprehension for clinicians leading to evidence-based practice, and improved clinical outcomes (figure).

Declaration of interests: I hold directorship and shares in Medical Artificial Intelligence Pty Ltd.

Linked Article
The false hope of current approaches to explainable artificial intelligence in health care (Ghassemi and colleagues[6]): The black-box nature of current artificial intelligence (AI) has caused some to question whether AI must be explainable to be used in high-stakes scenarios such as medicine. It has been argued that explainable AI will engender trust with the health-care workforce, provide transparency into the AI decision-making process, and potentially mitigate various kinds of bias.
In this Viewpoint, we argue that this argument represents a false hope for explainable AI and that current explainability methods are unlikely to achieve these goals for patient-level decision support.

References
1 Reddy S, Fox J, Purohit MP. Artificial intelligence-enabled healthcare delivery. J R Soc Med 2019; 112: 22–28.
2 Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med 2019; 17: 195.
3 Amann J, Blasimme A, Vayena E, Frey D, Madai VI, Precise Q. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak 2020; 20: 310.
4 Kundu S. AI in medicine must be explainable. Nat Med 2021; 27: 1328.
5 Yoon CH, Torrance R, Scheinerman N. Machine learning in medicine: should the pursuit of enhanced interpretability be abandoned? J Med Ethics 2021; medethics-2020-107102.
6 Ghassemi M, Oakden-Rayner L, Beam AL. The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit Health 2021; 3: e745–e750.
7 Vellido A. Societal issues concerning the application of artificial intelligence in medicine. Kidney Dis 2019; 5: 11–17.
8 Cutillo CM, Sharma KR, Foschini L, Kundu S, Mackintosh M, Mandl KD. Machine intelligence in healthcare: perspectives on trustworthiness, explainability, usability, and transparency. NPJ Digit Med 2020; 3: 47.
9 Marcus G. Deep learning: a critical appraisal. arXiv 2018; published online Jan 2 (preprint). https://arxiv.org/abs/1801.00631
10 Desai AN. Artificial intelligence: promise, pitfalls, and perspective. JAMA 2020; 323: 2448–2449.