Interpretability
Computer science
Convolutional neural network
Artificial intelligence
Segmentation
Pattern recognition (psychology)
Deep learning
Visualization
Discriminative model
Machine learning
Feature (linguistics)
Feature selection
Linguistics
Philosophy
Authors
Jun Wang, Yuan Cheng, Can Han, Yaofeng Wen, Hongbing Lu, Chen Liu, Yunlang She, Jiajun Deng, Biao Li, Dahong Qian, Chen Chang
Abstract
Feature maps produced by deep convolutional neural networks (DCNNs) have been widely used for visual explanation of DCNN-based classification tasks. However, many clinical applications, such as benign-malignant classification of lung nodules, require quantitative and objective interpretability rather than visualization alone. In this paper, we propose a novel interpretable multi-task attention learning network, named IMAL-Net, for early invasive adenocarcinoma screening in chest computed tomography images, which exploits a segmentation prior to support interpretable classification.

Two sub-ResNets are first integrated via a prior-attention mechanism for simultaneous nodule segmentation and invasiveness classification. Radiomic features extracted from the segmentation results are then concatenated, through fully connected (FC) layers, with high-level semantic features from the classification subnetwork to achieve superior performance. Meanwhile, an end-to-end feature selection mechanism (FSM) is designed to quantify which radiomic features most strongly affect the prediction for each sample, thereby providing clinically applicable interpretability for the prediction result.

Nodule samples from a total of 1626 patients were collected from two grade-A hospitals for large-scale verification. Five-fold cross-validation demonstrated that the proposed IMAL-Net achieves an AUC of 93.8% ± 1.1% and a recall of 93.8% ± 2.8% for identification of invasive lung adenocarcinoma.

It can be concluded that fusing semantic features and radiomic features yields clear improvements in the invasiveness classification task. Moreover, by learning finer-grained semantic features and highlighting the most important radiomic features, the proposed attention and FSM mechanisms not only further improve performance but also support both visual explanation and objective analysis of the classification results.
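To make the radiomic-semantic fusion and the FSM idea concrete, below is a minimal PyTorch-style sketch, not the authors' implementation: the module names (FeatureSelectionGate, FusionClassifier), feature dimensions, and layer choices are illustrative assumptions, since the abstract does not specify architectural details. The gate learns per-sample weights over radiomic features; the largest weights can then be read out as an objective indication of which features drove a given prediction.

import torch
import torch.nn as nn

class FeatureSelectionGate(nn.Module):
    # Hypothetical FSM-style gate: one learned weight in (0, 1) per radiomic feature.
    def __init__(self, num_radiomic: int):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(num_radiomic, num_radiomic),
            nn.Sigmoid(),
        )

    def forward(self, radiomic: torch.Tensor):
        weights = self.score(radiomic)        # (B, num_radiomic), per-sample weights
        return radiomic * weights, weights    # weighted features plus the explanation

class FusionClassifier(nn.Module):
    # Fuses high-level semantic features with gated radiomic features via FC layers,
    # then predicts invasiveness (benign-like vs. invasive).
    def __init__(self, semantic_dim: int, num_radiomic: int, num_classes: int = 2):
        super().__init__()
        self.fsm = FeatureSelectionGate(num_radiomic)
        self.fc = nn.Sequential(
            nn.Linear(semantic_dim + num_radiomic, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),
        )

    def forward(self, semantic: torch.Tensor, radiomic: torch.Tensor):
        gated, weights = self.fsm(radiomic)
        logits = self.fc(torch.cat([semantic, gated], dim=1))
        return logits, weights                # weights can be inspected per sample

# Toy usage with random tensors standing in for real features.
model = FusionClassifier(semantic_dim=512, num_radiomic=100)
semantic = torch.randn(4, 512)   # e.g. pooled features from the classification sub-ResNet
radiomic = torch.randn(4, 100)   # e.g. features computed from the segmentation mask
logits, weights = model(semantic, radiomic)
top5 = weights[0].topk(5).indices  # the five most influential radiomic features for sample 0

In this sketch the explanation comes "for free" from the forward pass: the same weights that modulate the radiomic features during training are the quantities reported to the clinician, which is the sense in which such a mechanism can provide objective, per-sample interpretability rather than only saliency-map visualization.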