Segmentation
Computer science
Artificial intelligence
Modality (human–computer interaction)
Sørensen–Dice coefficient
Contrast-enhanced ultrasound
Pattern recognition (psychology)
Margin (machine learning)
Image segmentation
Computer vision
Machine learning
Ultrasound
Radiology
Medicine
Authors
Xiaozheng Xie, Chen Chen, Xuefeng Liu, Yong Wang, Rui Wang, Jianwei Niu
Identifiers
DOI:10.1109/bibm58861.2023.10386035
Abstract
In the past decade, significant advancements have been made in utilizing deep learning for breast lesion segmentation. Recently, researchers have increasingly focused on harnessing the power of multiple modalities, recognizing their potential for enhancing segmentation performance. We observe that in clinical practice, many radiologists rely on two types of ultrasound images, namely ultrasound (US) and contrast-enhanced ultrasound (CEUS) data, for diagnosis. This motivates us to propose a multi-modal segmentation network, called IMAN (Iterative Mutual-Aid Network), based on these two modalities. The architecture of IMAN adopts a novel hourglass shape, featuring two branches connected by an 'X' pathway. One branch is dedicated to processing CEUS data, while the other handles US data. Each branch generates segmentation results specific to its respective modality. The 'X' pathway, realized by a margin mask generator module, serves as a bridge between the branches by feeding the segmentation results of one branch to the other as additional input. This head-to-tail pathway effectively facilitates mutual aid between the two modalities. In addition, we propose an iterative training policy to fully exploit the information from both US and CEUS data. Experimental results on a Breast-US-CEUS dataset comprising 169 samples demonstrate the effectiveness of IMAN, achieving Dice Similarity Coefficients of 83.96% and 81.16% for US images and CEUS videos, respectively. These scores surpass those obtained by many state-of-the-art segmentation methods. Furthermore, IMAN exhibits robust generalization capabilities across different segmentation structures.
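The Dice Similarity Coefficient used to evaluate IMAN can be computed directly from binary segmentation masks. Below is a minimal NumPy sketch; the function name, smoothing term, and toy masks are illustrative and not taken from the paper's implementation:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Sørensen–Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|).

    eps avoids division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two overlapping toy 4x4 binary masks.
a = np.zeros((4, 4), dtype=np.uint8); a[1:3, 1:3] = 1  # 4 foreground pixels
b = np.zeros((4, 4), dtype=np.uint8); b[1:3, 1:4] = 1  # 6 foreground pixels
print(round(dice_coefficient(a, b), 2))  # 2*4 / (4+6) = 0.8
```

A score of 1.0 indicates a perfect overlap between prediction and ground truth, so the reported 83.96% (US) and 81.16% (CEUS) correspond to Dice values of roughly 0.84 and 0.81 averaged over the test set.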