Authors
Feiyi Fang, Yazhou Yao, Tao Zhou, Guo-Sen Xie, Jianfeng Lu
Source
Journal: IEEE Journal of Biomedical and Health Informatics
[Institute of Electrical and Electronics Engineers]
Date: 2022-11-01
Volume/Issue: 26 (11): 5310-5320
Citations: 45
Identifier
DOI: 10.1109/jbhi.2021.3109301
Abstract
Accurate medical image segmentation of brain tumors is necessary for diagnosing, monitoring, and treating the disease. In recent years, with the gradual emergence of multi-sequence magnetic resonance imaging (MRI), multi-modal MRI diagnosis has played an increasingly important role in the early diagnosis of brain tumors by providing complementary information about a given lesion. Different MRI modalities vary significantly in context, as well as in coarse and fine information. Because the manual identification of brain tumors is very complicated, it usually requires lengthy consultation among multiple experts. The automatic segmentation of brain tumors from MRI images can thus greatly reduce the workload of doctors and buy more time for treating patients. In this paper, we propose a multi-modal brain tumor segmentation framework that adopts the hybrid fusion of modality-specific features using a self-supervised learning strategy. The algorithm is based on a fully convolutional neural network. Firstly, we propose a multi-input architecture that learns independent features from multi-modal data and can be adapted to different numbers of multi-modal inputs. Compared with single-modal multi-channel networks, our model provides a better feature extractor for segmentation tasks, as it learns cross-modal information from multi-modal data. Secondly, we propose a new feature fusion scheme, named hybrid attentional fusion. This scheme enables the network to learn a hybrid representation of multiple features and to capture the correlation between them through an attention mechanism. Unlike popular methods such as feature map concatenation, this scheme focuses on the complementarity between multi-modal data, which can significantly improve the segmentation results of specific regions. Thirdly, we propose a self-supervised learning strategy for brain tumor segmentation tasks. Our experimental results demonstrate the effectiveness of the proposed model against other state-of-the-art multi-modal medical segmentation methods.
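The abstract contrasts plain feature-map concatenation with an attention-based fusion of modality-specific features extracted by separate per-modality encoders. The sketch below is only an illustration of that general idea in PyTorch, not the authors' architecture: the module names, channel sizes, and the particular spatial-attention form are assumptions, and the self-supervised training strategy is not shown.

```python
# Minimal sketch (assumptions noted above): per-modality encoders plus an
# attention-weighted fusion of their feature maps, in place of plain
# channel-wise concatenation.
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Small feature extractor applied independently to each MRI sequence."""

    def __init__(self, in_ch: int = 1, feat_ch: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class AttentionFusion(nn.Module):
    """Fuse N modality feature maps with learned per-modality spatial weights."""

    def __init__(self, feat_ch: int, num_modalities: int):
        super().__init__()
        # Predict one spatial attention map per modality from the stacked features.
        self.attn = nn.Conv2d(feat_ch * num_modalities, num_modalities, kernel_size=1)

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        stacked = torch.cat(feats, dim=1)                   # (B, N*C, H, W)
        weights = torch.softmax(self.attn(stacked), dim=1)  # (B, N, H, W)
        # Weighted sum over modalities instead of concatenation.
        fused = sum(w.unsqueeze(1) * f
                    for w, f in zip(weights.unbind(dim=1), feats))
        return fused                                         # (B, C, H, W)


if __name__ == "__main__":
    # Four single-channel MRI sequences (e.g., T1, T1ce, T2, FLAIR).
    encoders = nn.ModuleList(ModalityEncoder() for _ in range(4))
    fusion = AttentionFusion(feat_ch=32, num_modalities=4)
    inputs = [torch.randn(2, 1, 64, 64) for _ in range(4)]
    fused = fusion([enc(x) for enc, x in zip(encoders, inputs)])
    print(fused.shape)  # torch.Size([2, 32, 64, 64])
```

The fused map keeps the per-modality channel width rather than multiplying it by the number of inputs, and the softmax weights let the network emphasize whichever sequence is most informative at each spatial location, which is the complementarity argument made in the abstract.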