Keywords
Computer science, Artificial intelligence, Segmentation, Convolutional neural network, Breast ultrasonography, Encoder, Pattern recognition, Speckle noise, Discriminative, Computer vision, Image segmentation, Deep learning, Speckle pattern, Breast cancer, Mammography, Medicine, Cancer, Internal medicine
Authors
Qiqi He, Qiuju Yang, Minghao Xie
Identifier
DOI: 10.1016/j.compbiomed.2023.106629
Abstract
Automatic breast ultrasound image segmentation helps radiologists improve the accuracy of breast cancer diagnosis. In recent years, convolutional neural networks (CNNs) have achieved great success in medical image analysis. However, they are limited in modeling long-range relations, which is unfavorable for ultrasound images with speckle noise and shadows and reduces the accuracy of breast lesion segmentation. The Transformer can capture sufficient global information, but it is weak at acquiring local details and needs to be pre-trained on large-scale datasets. In this paper, we propose a Hybrid CNN-Transformer network (HCTNet) to boost breast lesion segmentation in ultrasound images. In the encoder of HCTNet, Transformer Encoder Blocks (TEBlocks) are designed to learn global contextual information and are combined with CNNs to extract features. In the decoder of HCTNet, a Spatial-wise Cross Attention (SCA) module is developed based on the spatial attention mechanism, which reduces the semantic discrepancy with the encoder. Moreover, residual connections are used between decoder blocks to make the generated features more discriminative by aggregating contextual feature maps at different semantic scales. Extensive experiments on three public breast ultrasound datasets demonstrate that HCTNet outperforms other medical image segmentation methods and recent semantic segmentation methods on breast ultrasound lesion segmentation.
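The abstract names the two key components (TEBlocks in the encoder and the SCA module in the decoder) without giving their internals. The following is a minimal PyTorch sketch of how such blocks could look: the class names, channel and head counts, and the spatial-gating design of the cross-attention are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch, assuming: a TEBlock applies self-attention over the flattened
# CNN feature map, and SCA uses a decoder-conditioned spatial gate on the
# encoder skip feature. All hyperparameters are placeholders.
import torch
import torch.nn as nn


class TEBlock(nn.Module):
    """Transformer-style encoder block applied to a CNN feature map (assumed design).

    The feature map is flattened to a token sequence, passed through multi-head
    self-attention and an MLP with residual connections, then reshaped back, so
    global context complements the local CNN features.
    """

    def __init__(self, channels: int, num_heads: int = 4, mlp_ratio: int = 2):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(channels)
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels * mlp_ratio),
            nn.GELU(),
            nn.Linear(channels * mlp_ratio, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)               # (B, H*W, C)
        t = self.norm1(tokens)
        tokens = tokens + self.attn(t, t, t, need_weights=False)[0]
        tokens = tokens + self.mlp(self.norm2(tokens))
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class SpatialCrossAttention(nn.Module):
    """Sketch of a spatial-wise cross attention between decoder and encoder features.

    The concatenated features produce a spatial gate that re-weights the encoder
    skip feature before fusion, which is one plausible way to reduce the semantic
    discrepancy between the two paths.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels * 2, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(channels * 2, channels, kernel_size=3, padding=1)

    def forward(self, decoder_feat: torch.Tensor, encoder_feat: torch.Tensor) -> torch.Tensor:
        attn = self.gate(torch.cat([decoder_feat, encoder_feat], dim=1))  # (B, 1, H, W)
        gated_skip = encoder_feat * attn                                  # spatial re-weighting
        return self.fuse(torch.cat([decoder_feat, gated_skip], dim=1))


if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)   # CNN feature map from one encoder stage
    dec = torch.randn(1, 64, 32, 32)    # upsampled decoder feature at the same scale
    print(TEBlock(64)(feat).shape)                     # torch.Size([1, 64, 32, 32])
    print(SpatialCrossAttention(64)(dec, feat).shape)  # torch.Size([1, 64, 32, 32])
```

In this sketch the gate is a single-channel map, so it only re-weights positions; the paper's actual SCA module may condition the attention differently or operate per channel.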