Discriminative
Transfer learning
Computer science
Artificial intelligence
Malignancy
Nodule (geology)
Deep learning
Machine learning
Lung cancer
Supervised learning
Pattern recognition (psychology)
Radiology
Medicine
Pathology
Artificial neural network
Biology
Paleontology
Authors
Ruoyu Wu, Changyu Liang, Yuan Li, Xu Shi, Jiuquan Zhang, Hong Huang
Identifier
DOI: 10.1016/j.eswa.2022.119339
Abstract
Lung cancer is one of the most lethal malignant diseases and poses a serious threat to human health and life. Accurate differential diagnosis of lung nodules is a vital step in computed tomography (CT)-based noninvasive screening for lung cancer. Although deep learning-based methods have achieved good results in nodule malignancy prediction, two fundamental challenges remain: insufficient labeled samples and interference from background tissues. Motivated by these facts, a self-supervised transfer learning framework driven by visual attention (STLF-VA) is presented for benign–malignant identification of nodules on chest CT; it takes volumes containing the entire nodule as inputs to obtain discriminative features. Compared with traditional approaches that build transfer learning models from 2D natural images or train 3D models from scratch, the proposed STLF-VA method effectively alleviates the dependence on labeled samples by mining the valuable information in unlabeled 3D CT scans in a coarse-to-fine self-supervised transfer learning fashion. Unlike a single attention mechanism, the multi-view aggregative attention (MVAA) module embedded in the STLF-VA architecture recalibrates multi-layer feature maps from multiple attention perspectives, strengthening robustness to interference from background information. Moreover, a new dataset, CQUCH-LND, is constructed to evaluate the effectiveness of the STLF-VA model in clinical practice. Experimental results on the clinical CQUCH-LND dataset and the public LIDC-IDRI dataset indicate that the proposed STLF-VA framework achieves more competitive performance than state-of-the-art nodule classification approaches.
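The abstract describes the MVAA module only at a high level (recalibrating multi-layer 3D feature maps from more than one attention "view"). As a rough illustration of that general idea, and not the authors' actual MVAA implementation, the PyTorch sketch below combines a channel gate and a spatial gate over a 3D CT feature map; the module name, the reduction ratio, and the kernel sizes are all illustrative assumptions.

```python
# Hypothetical sketch of multi-view attention recalibration for 3D feature maps.
# This is NOT the paper's MVAA module; names and hyperparameters are assumptions.
import torch
import torch.nn as nn


class MultiViewAttention3D(nn.Module):
    """Recalibrates a 3D feature map with channel-wise and voxel-wise attention."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel view: squeeze spatial dimensions, then excite per-channel weights.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial view: produce a voxel-wise attention map over the whole volume.
        self.spatial_gate = nn.Sequential(
            nn.Conv3d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)   # suppress uninformative channels
        x = x * self.spatial_gate(x)   # down-weight background voxels
        return x


if __name__ == "__main__":
    feats = torch.randn(2, 32, 16, 32, 32)  # (batch, channels, depth, height, width)
    attended = MultiViewAttention3D(32)(feats)
    print(attended.shape)  # torch.Size([2, 32, 16, 32, 32])
```

In this sketch, applying the two gates multiplicatively is one simple way to combine attention views; the paper's module may aggregate them differently.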