Segmentation
Computer science
Breast ultrasound
Transformer
Ultrasound
Lesion
Artificial intelligence
Computer vision
3D ultrasound
Radiology
Pattern recognition (psychology)
Medicine
Breast cancer
Mammography
Internal medicine
Surgery
Physics
Cancer
Voltage
Quantum mechanics
Authors
Xiaolei Qu, Jiale Zhou, Jue Jiang, Wenhan Wang, Haoran Wang, Shuai Wang, Wenzhong Tang, Xun Lin
Identifier
DOI: 10.1016/j.inffus.2024.102430
Abstract
Breast lesion segmentation of ultrasound images plays a crucial role in early screening and diagnosis of breast lesions. However, accurately segmenting lesions in breast ultrasound (BUS) images is challenging due to prevalent issues such as low contrast, intense speckle noise, and blurred lesion boundaries. Although existing deep learning-based segmentation models have made significant progress, few have strategically addressed these complex and noisy regional features in BUS images. The easy-to-hard manner in Curriculum Learning (CL) appears promising, but it often remains at the sample level and does not adequately address regional complexities. To address this, we design a region-wise CL to dynamically adjust the focus on hard regional features in BUS images. Specifically, we propose a Regional Easy-Hard-Aware Transformer (EH-Former), structured in two stages for lesion segmentation in BUS images. The first stage incorporates uncertainty estimation for dividing regional difficulty. In the second stage, we propose a novel Adaptive Easy-Hard region Separator (AdaSep), a module employing uncertainty-aware regularization to separate features of varying difficulties, allowing the two streams within EH-Former to focus on learning regional features of different complexities. Additionally, we develop a Dynamic Easy-Hard Feature Fusion (D-Fusion) module, dynamically adjusting the fusion weight of easy and hard regional features based on the current training epoch to achieve progressive regional feature learning. Extensive experimental results on five public datasets show that the proposed EH-Former consistently outperforms state-of-the-art methods in most metrics and exhibits better domain generalization capabilities. Furthermore, our region-wise CL significantly enhances the performance of EH-Former in detecting complex tissue structures and noisy areas that are challenging to segment accurately. The source code is available at https://github.com/lele0109/EH-Former.
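The abstract does not give implementation details, so the following is only a rough Python/PyTorch sketch of the kind of region-wise curriculum it describes: estimating a per-pixel uncertainty map, splitting features into easy (low-uncertainty) and hard (high-uncertainty) regions, and fusing the two streams with an epoch-dependent weight. The function and class names (`entropy_uncertainty`, `split_easy_hard`, `EpochScheduledFusion`), the entropy-based uncertainty, the median threshold, and the linear weight schedule are all assumptions for illustration, not the paper's actual AdaSep or D-Fusion formulation.

```python
import torch
import torch.nn as nn


def entropy_uncertainty(logits: torch.Tensor) -> torch.Tensor:
    """Pixel-wise predictive entropy from segmentation logits of shape (B, C, H, W)."""
    probs = torch.softmax(logits, dim=1)
    return -(probs * probs.clamp_min(1e-8).log()).sum(dim=1, keepdim=True)


def split_easy_hard(features, uncertainty, threshold):
    """Mask features into low-uncertainty (easy) and high-uncertainty (hard) regions."""
    hard_mask = (uncertainty > threshold).float()
    return features * (1.0 - hard_mask), features * hard_mask


class EpochScheduledFusion(nn.Module):
    """Fuse easy/hard region features with an epoch-dependent weight, so the
    network gradually shifts emphasis from easy regions to hard regions."""

    def __init__(self, total_epochs: int):
        super().__init__()
        self.total_epochs = total_epochs

    def forward(self, easy_feat, hard_feat, epoch: int):
        # Hypothetical linear schedule: alpha grows from 0 to 1 over training,
        # moving the fusion weight from easy regions toward hard regions.
        alpha = min(epoch / max(self.total_epochs - 1, 1), 1.0)
        return (1.0 - alpha) * easy_feat + alpha * hard_feat


if __name__ == "__main__":
    logits = torch.randn(2, 2, 64, 64)   # coarse segmentation logits (stage 1)
    feats = torch.randn(2, 32, 64, 64)   # intermediate feature maps
    unc = entropy_uncertainty(logits)    # per-pixel uncertainty map
    easy, hard = split_easy_hard(feats, unc, threshold=unc.median())
    fusion = EpochScheduledFusion(total_epochs=100)
    fused = fusion(easy, hard, epoch=10)
    print(fused.shape)                   # torch.Size([2, 32, 64, 64])
```

In this toy schedule, early epochs weight the easy-region stream more heavily and later epochs weight the hard-region stream, mirroring the easy-to-hard progression the abstract attributes to its region-wise curriculum learning.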