Interpretability
Outcome (game theory)
Artificial intelligence
Medicine
Black box
Reading (process)
Process (computing)
Machine learning
Computer science
Political science
Mathematics
Operating systems
Mathematical economics
Law
Authors
Aaron I. F. Poon, Joseph J. Y. Sung
Abstract
One of the biggest challenges of utilizing artificial intelligence (AI) in medicine is that physicians are reluctant to trust and adopt something that they do not fully understand and regard as a "black box." Machine learning (ML) can assist in reading radiological, endoscopic, and histological images, suggest diagnoses, predict disease outcomes, and even recommend therapeutic and surgical decisions. However, clinical adoption of these AI tools has been slow because of a lack of trust. Beyond clinicians' doubts, patients' lack of confidence in AI‐powered technologies also hampers development. While patients may accept that human errors can occur, they are anticipated to have little tolerance for machine error. To implement AI in medicine successfully, the interpretability of ML algorithms needs to improve. Opening the black box in AI medicine requires a stepwise approach: incorporating small steps of biological explanation and clinical experience into ML algorithms can help build trust and acceptance. AI software developers will have to demonstrate clearly that when ML technologies are integrated into the clinical decision‐making process, they actually improve clinical outcomes. Enhancing the interpretability of ML algorithms is a crucial step in adopting AI in medicine.