Keywords: Computer Science; Segmentation; Artificial Intelligence; Voxel; Context (Archaeology); Deep Learning; Pattern Recognition (Psychology); Feature (Linguistics); Wavelet; Pattern; Encoder; Linguistics; Philosophy; Paleontology; Social Science; Sociology; Biology; Operating System
Authors
Yuheng Pan,Haohan Yong,Weijia Lu,Guoyan Li,Jia Cong
Abstract
Background and objective: Accurate segmentation of brain tumors from multimodal magnetic resonance imaging (MRI) is of significant importance in clinical diagnosis and surgical intervention. Current deep learning methods handle multimodal MRI with an early fusion strategy that implicitly assumes the relationships between modalities are linear, which tends to ignore complementary information between modalities and degrades model performance. Meanwhile, the localized nature of the convolution operation prevents long-range relationships between voxels from being captured.

Method: To address these problems, we propose a multimodal segmentation network based on a late fusion strategy that employs multiple encoders and a single decoder for brain tumor segmentation. Each encoder is specialized for a distinct modality. Notably, our framework includes a feature fusion module based on a 3D discrete wavelet transform, aimed at extracting complementary features across the encoders. Additionally, a 3D global context-aware module is introduced to capture long-range dependencies among tumor voxels at the high-level feature stage. The decoder combines the fused and global features to enhance the network's segmentation performance.

Result: The proposed model was evaluated on the publicly available BraTS2018 and BraTS2021 datasets, and the experimental results are competitive with state-of-the-art methods.

Conclusion: The results demonstrate that our approach applies a novel concept of multimodal fusion within deep neural networks and delivers more accurate and promising brain tumor segmentation, with the potential to assist physicians in diagnosis.
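The abstract does not specify the internals of the 3D-DWT fusion module. As a rough illustration of the general idea only, the NumPy sketch below fuses two co-registered 3D feature volumes in the wavelet domain with a single-level separable Haar transform. The choice of Haar wavelet, mean fusion for the approximation band, and max-absolute fusion for the detail bands are all assumptions for this sketch, not the authors' design.

```python
import numpy as np

def haar_axis(x, axis):
    """Single-level orthonormal Haar analysis along one axis (even length)."""
    x = np.moveaxis(x, axis, 0)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation (low-pass)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail (high-pass)
    return np.moveaxis(lo, 0, axis), np.moveaxis(hi, 0, axis)

def ihaar_axis(lo, hi, axis):
    """Exact inverse of haar_axis."""
    lo = np.moveaxis(lo, axis, 0)
    hi = np.moveaxis(hi, axis, 0)
    out = np.empty((2 * lo.shape[0],) + lo.shape[1:], dtype=lo.dtype)
    out[0::2] = (lo + hi) / np.sqrt(2.0)
    out[1::2] = (lo - hi) / np.sqrt(2.0)
    return np.moveaxis(out, 0, axis)

def dwt3(x):
    """One-level separable 3D Haar DWT -> 8 subbands keyed 'LLL'..'HHH'."""
    bands = {'': x}
    for axis in range(3):
        bands = {k + s: b
                 for k, v in bands.items()
                 for s, b in zip('LH', haar_axis(v, axis))}
    return bands

def idwt3(bands):
    """Inverse one-level separable 3D Haar DWT."""
    for axis in (2, 1, 0):
        prefixes = {k[:-1] for k in bands}
        bands = {p: ihaar_axis(bands[p + 'L'], bands[p + 'H'], axis)
                 for p in prefixes}
    return bands['']

def wavelet_fuse(feat_a, feat_b):
    """Fuse two co-registered volumes in the wavelet domain: average the
    low-frequency band, keep the stronger (max-abs) detail coefficients,
    then reconstruct."""
    ba, bb = dwt3(feat_a), dwt3(feat_b)
    fused = {}
    for k in ba:
        if k == 'LLL':                      # smooth content: blend
            fused[k] = 0.5 * (ba[k] + bb[k])
        else:                               # edges/texture: pick stronger
            fused[k] = np.where(np.abs(ba[k]) >= np.abs(bb[k]), ba[k], bb[k])
    return idwt3(fused)
```

Because the Haar pair used here is orthonormal, `idwt3(dwt3(x))` reconstructs `x` exactly, and fusing a volume with itself returns the original volume; in the paper's setting this kind of per-band rule would let detail coefficients carry complementary edge information across modalities.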