Interpretability
Glaucoma
Artificial intelligence
Computer science
Deep learning
Segmentation
Machine learning
Medical diagnosis
Contrast (vision)
Pattern recognition (psychology)
Medicine
Radiology
Ophthalmology
Authors
Wangmin Liao, Beiji Zou, Rongchang Zhao, Yuanqiong Chen, Zhiyou He, Mengjie Zhou
Source
Journal: IEEE Journal of Biomedical and Health Informatics (Institute of Electrical and Electronics Engineers)
Date: 2020-05-01
Volume/Issue: 24 (5): 1405-1412
Citations: 98
Identifier
DOI: 10.1109/jbhi.2019.2949075
Abstract
Despite the potential to revolutionise disease diagnosis by performing data-driven classification, the clinical interpretability of ConvNets remains challenging. In this paper, a novel clinically interpretable ConvNet architecture is proposed not only for accurate glaucoma diagnosis but also for more transparent interpretation, by highlighting the distinct regions recognised by the network. To the best of our knowledge, this is the first work to provide interpretable glaucoma diagnosis with a popular deep learning model. We propose a novel scheme, which we refer to as M-LAP, for aggregating features from different scales to improve the performance of glaucoma diagnosis. Moreover, by modelling the correspondence from binary diagnosis information to spatial pixels, the proposed scheme generates glaucoma activations, which bridge the gap between the global semantic diagnosis and precise localization. In contrast to previous works, it can discover the distinctive local regions in fundus images as evidence for clinically interpretable glaucoma diagnosis. Experimental results on the challenging ORIGA dataset show that our method outperforms state-of-the-art methods in glaucoma diagnosis with the highest AUC (0.88). Remarkably, further results, optic disc segmentation (Dice of 0.9) and localization of local disease foci based on the evidence map, demonstrate the effectiveness of our method in terms of clinical interpretability.
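The abstract describes two ingredients: aggregating features from several scales (the M-LAP scheme) and projecting the binary diagnosis back onto spatial pixels to produce evidence maps. The paper's implementation details are not given here, so the following is a minimal PyTorch sketch of one plausible class-activation-map-style formulation under those assumptions; the class name MultiScaleGAPHead, the method evidence_map, and the choice of global average pooling with a single linear classifier are illustrative, not the authors' code.

# Minimal sketch (not the authors' implementation): multi-scale feature
# aggregation via global average pooling, plus CAM-style evidence maps.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleGAPHead(nn.Module):
    """Pool feature maps from several backbone scales, classify with a single
    linear layer, and reuse the linear weights to project the diagnosis back
    onto the spatial feature maps (an assumed CAM-style formulation)."""

    def __init__(self, channels_per_scale, num_classes=2):
        super().__init__()
        self.channels_per_scale = channels_per_scale
        self.fc = nn.Linear(sum(channels_per_scale), num_classes, bias=False)

    def forward(self, feature_maps):
        # feature_maps: list of tensors [B, C_i, H_i, W_i] from different stages
        pooled = [F.adaptive_avg_pool2d(f, 1).flatten(1) for f in feature_maps]
        return self.fc(torch.cat(pooled, dim=1))              # [B, num_classes]

    def evidence_map(self, feature_maps, class_idx, out_size):
        # Weight each channel by the classifier weight of the chosen class,
        # sum over channels per scale, upsample to a common size, and combine.
        weights = self.fc.weight[class_idx]                    # [sum(C_i)]
        maps, start = [], 0
        for f in feature_maps:
            c = f.shape[1]
            w = weights[start:start + c].view(1, c, 1, 1)
            cam = (f * w).sum(dim=1, keepdim=True)             # [B, 1, H_i, W_i]
            maps.append(F.interpolate(cam, size=out_size,
                                      mode="bilinear", align_corners=False))
            start += c
        return torch.relu(torch.stack(maps, dim=0).sum(dim=0)) # [B, 1, H, W]


if __name__ == "__main__":
    # Toy check with random "multi-scale" features standing in for a backbone.
    feats = [torch.randn(1, 64, 64, 64), torch.randn(1, 128, 32, 32)]
    head = MultiScaleGAPHead([64, 128], num_classes=2)
    logits = head(feats)
    cam = head.evidence_map(feats, class_idx=logits.argmax(1).item(),
                            out_size=(256, 256))
    print(logits.shape, cam.shape)  # [1, 2] and [1, 1, 256, 256]

Because the classifier acts on globally pooled features, its weights can be pushed back onto the pre-pooling feature maps without retraining, which is what lets a binary diagnosis yield a per-pixel evidence map in this kind of formulation.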