Interpretability
Artificial intelligence
Computer science
Machine learning
Classifier (UML)
Deep learning
Reciprocal
Segmentation
Supervised learning
Contextual image classification
Semi-supervised learning
Pattern recognition (psychology)
Artificial neural network
Image (mathematics)
Philosophy
Linguistics
Authors
Chong Wang, Yuanhong Chen, Fengbei Liu, Michael S. Elliott, Chun Fung Kwok, Carlos A. Peña‐Solórzano, Helen Frazer, Davis J. McCarthy, Gustavo Carneiro
Source
Journal: IEEE Transactions on Medical Imaging
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Issue: 43 (1): 392-404
Citations: 4
Identifier
DOI:10.1109/tmi.2023.3306781
Abstract
The deployment of automated deep-learning classifiers in clinical practice has the potential to streamline the diagnosis process and improve diagnosis accuracy, but the acceptance of those classifiers relies on both their accuracy and interpretability. In general, accurate deep-learning classifiers provide little model interpretability, while interpretable models do not have competitive classification accuracy. In this paper, we introduce a new deep-learning diagnosis framework, called InterNRL, that is designed to be highly accurate and interpretable. InterNRL consists of a student-teacher framework, where the student model is an interpretable prototype-based classifier (ProtoPNet) and the teacher is an accurate global image classifier (GlobalNet). The two classifiers are mutually optimised with a novel reciprocal learning paradigm in which the student ProtoPNet learns from optimal pseudo labels produced by the teacher GlobalNet, while GlobalNet learns from ProtoPNet's classification performance and pseudo labels. This reciprocal learning paradigm enables InterNRL to be flexibly optimised under both fully- and semi-supervised learning scenarios, reaching state-of-the-art classification performance in both scenarios for the tasks of breast cancer and retinal disease diagnosis. Moreover, relying on weakly-labelled training images, InterNRL also achieves superior breast cancer localisation and brain tumour segmentation results compared with other competing methods.
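The reciprocal learning paradigm described above can be illustrated with a minimal toy sketch: a simple logistic-regression "teacher" (a stand-in for GlobalNet) produces pseudo labels on unlabelled data, a nearest-prototype "student" (a stand-in for ProtoPNet) updates its class prototypes from those pseudo labels, and the teacher in turn receives a feedback signal from labelled examples the student still misclassifies. All models, data, and update rules here are illustrative assumptions, not the paper's actual architecture or training objective; only NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class 2-D data standing in for images: a small labelled set
# and a larger unlabelled set (the semi-supervised scenario).
X_lab = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y_lab = np.array([0] * 20 + [1] * 20)
X_unl = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Teacher: logistic regression (stand-in for the global classifier).
w, b = np.zeros(2), 0.0

# Student: one prototype per class (stand-in for the prototype-based
# classifier); prediction is nearest-prototype assignment.
protos = np.array([[0.0, 0.0], [0.1, 0.1]])

def student_predict(X, protos):
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return d.argmin(axis=1)

for step in range(200):
    # 1) Teacher takes a gradient step on the labelled data.
    p = sigmoid(X_lab @ w + b)
    g = p - y_lab
    w -= 0.1 * (X_lab.T @ g) / len(y_lab)
    b -= 0.1 * g.mean()

    # 2) Teacher produces pseudo labels for the unlabelled data.
    pseudo = (sigmoid(X_unl @ w + b) > 0.5).astype(int)

    # 3) Student learns from the teacher's pseudo labels: each class
    #    prototype drifts toward the mean of its pseudo-labelled points.
    for c in (0, 1):
        mask = pseudo == c
        if mask.any():
            protos[c] = 0.9 * protos[c] + 0.1 * X_unl[mask].mean(axis=0)

    # 4) Reciprocal direction: the teacher also trains on labelled
    #    examples the student currently gets wrong, so the student's
    #    performance feeds back into the teacher's updates.
    hard = student_predict(X_lab, protos) != y_lab
    if hard.any():
        p_h = sigmoid(X_lab[hard] @ w + b)
        g_h = p_h - y_lab[hard]
        w -= 0.05 * (X_lab[hard].T @ g_h) / hard.sum()
        b -= 0.05 * g_h.mean()

acc_student = (student_predict(X_lab, protos) == y_lab).mean()
acc_teacher = ((sigmoid(X_lab @ w + b) > 0.5) == y_lab).mean()
```

On this well-separated toy data both models end up accurate; the point of the sketch is only the information flow: teacher → pseudo labels → student, and student performance → teacher updates, which is the loop the abstract calls reciprocal learning.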