Segmentation
Computer science
Artificial intelligence
Deep learning
Context (archaeology)
Encoder
Block (permutation group theory)
Artificial neural network
Pattern recognition (psychology)
Computer vision
Mathematics
Geometry
Biology
Operating system
Paleontology
Authors
Baiying Lei,Shan Huang,Hang Li,Ran Li,Cheng Bian,Yi‐Hong Chou,Jing Qin,Peng Zhou,Xuehao Gong,Jie‐Zhi Cheng
Identifier
DOI:10.1016/j.media.2020.101753
Abstract
The automated whole breast ultrasound (AWBUS) is a new breast imaging technique that can depict the whole breast anatomy. To facilitate the reading of AWBUS images and support breast density estimation, an automatic breast anatomy segmentation method for AWBUS images is proposed in this study. The problem is challenging, as it must address issues such as low image quality, ill-defined boundaries, and large anatomical variation. To address these issues, a new deep learning encoder-decoder segmentation method based on a self-co-attention mechanism is developed. The self-attention mechanism comprises a spatial and channel attention (SC) module and is embedded in the ResNeXt block (i.e., Res-SC) in the encoder path. A non-local context block (NCB) is further incorporated to augment the learning of high-level contextual cues. The decoder path of the proposed method is equipped with a weighted up-sampling block (WUB) to attain a better, class-specific up-sampling effect. Meanwhile, a co-attention mechanism is also developed to improve segmentation coherence between two consecutive slices. Extensive experiments are conducted in comparison with several state-of-the-art deep learning segmentation methods. The experimental results corroborate the effectiveness of the proposed method on the difficult breast anatomy segmentation problem in AWBUS images.
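To make the spatial-and-channel attention idea mentioned in the abstract concrete, below is a minimal PyTorch sketch of an SC-style attention module that could be embedded in an encoder block. It is an illustrative assumption, not the authors' Res-SC implementation: the module name `SCAttention`, the squeeze `reduction` ratio, and the 7x7 spatial convolution are all choices made here for the example.

```python
# Illustrative sketch only: a spatial + channel attention ("SC") module of the
# kind described in the abstract. Names and hyperparameters are assumptions,
# not the paper's code.
import torch
import torch.nn as nn


class SCAttention(nn.Module):
    """Applies channel attention followed by spatial attention to a feature map."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite per-channel weights.
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: collapse channels to an H x W weight map.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Re-weight channels.
        x = x * self.channel_fc(x)
        # Re-weight spatial positions using mean- and max-pooled channel stats.
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True).values], dim=1)
        return x * self.spatial_conv(pooled)


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)   # batch of encoder feature maps
    print(SCAttention(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```

In the paper's design such a module sits inside a ResNeXt block on the encoder path (the Res-SC block); the non-local context block, weighted up-sampling block, and inter-slice co-attention described in the abstract are separate components and are not sketched here.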