Artificial intelligence
Computer science
Convolutional neural network
Feature extraction
Artificial neural network
Deep learning
Ultrasound
Pattern recognition (psychology)
Breast ultrasonography
Breast cancer
Mammography
Cancer
Radiology
Medicine
Internal medicine
Authors
Xiaolei Qu,Hongyan Lu,Wenzhong Tang,Shuai Wang,Dezhi Zheng,Yaxin Hou,Jue Jiang
Abstract
Breast cancer is the most commonly occurring cancer worldwide. The ultrasound reflectivity imaging technique can be used to obtain breast ultrasound (BUS) images, which can be used to classify benign and malignant tumors. However, this classification is subjective and depends on the experience and skill of operators and doctors. Automatic classification can assist doctors and improve objectivity, but current convolutional neural networks (CNNs) are not good at learning global features, and vision transformers (ViTs) are not good at extracting local features. In this study, we proposed a visual geometry group attention ViT (VGGA-ViT) network to overcome their disadvantages. In the proposed method, we used a CNN module to extract local features and employed a ViT module to learn the global relationships among different regions and enhance the relevant local features. The CNN module, named the VGGA module, was composed of a VGG backbone, a feature-extraction fully connected layer, and a squeeze-and-excitation block. Both the VGG backbone and the ViT module were pretrained on the ImageNet dataset and retrained using BUS samples in this study. Two BUS datasets were employed for validation, and cross-validation was conducted on both. For Dataset A, the proposed VGGA-ViT network achieved high accuracy (88.71 ± 1.55%), recall (90.73 ± 1.57%), specificity (85.58 ± 3.35%), precision (90.77 ± 1.98%), F1 score (90.73 ± 1.24%), and Matthews correlation coefficient (MCC) (76.34 ± 3.29%), which were better than those of all previous networks compared in this study.
Dataset B was used as a separate test set; on it, VGGA-ViT achieved the highest accuracy (81.72 ± 2.99%), recall (64.45 ± 2.96%), specificity (90.28 ± 3.51%), precision (77.08 ± 7.21%), F1 score (70.11 ± 4.25%), and MCC (57.64 ± 6.88%). In summary, we proposed the VGGA-ViT for BUS classification, which learns both local and global features, and it achieved higher accuracy than the previous methods compared.
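The abstract names a squeeze-and-excitation block as one component of the VGGA module. As a minimal sketch of how such a block recalibrates channel features, the following NumPy function implements the standard squeeze (global average pooling), excitation (bottleneck fully connected layers with ReLU and sigmoid), and scale steps. The weight shapes and reduction ratio here are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excitation(feature_map, w1, w2):
    """Channel recalibration for a (C, H, W) feature map.

    w1: (C // r, C) and w2: (C, C // r) are hypothetical bottleneck
    weights with reduction ratio r; biases are omitted for brevity.
    """
    # Squeeze: global average pooling reduces each channel to a scalar -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck FC layers, ReLU then sigmoid gating -> (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))
    # Scale: reweight each channel map by its learned importance
    return feature_map * s[:, None, None]

# Illustrative usage with random features and weights (C=8, r=4)
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8))
w2 = rng.standard_normal((8, 2))
y = squeeze_excitation(x, w1, w2)
```

Because the sigmoid gate lies in (0, 1), each output channel is a uniformly attenuated copy of the input channel, which is how the block emphasizes informative channels and suppresses the rest.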