Artificial intelligence
Computer science
Computer vision
Image segmentation
Image fusion
Segmentation
Fusion
Image (mathematics)
Pattern recognition (psychology)
Linguistics
Philosophy
Authors
Yingjie Chen, Lizhen Cui, Hao Wu
Identifier
DOI: 10.1109/medai59581.2023.00065
Abstract
In recent years, significant progress has been achieved in medical image segmentation by leveraging deep neural networks based on the U-Net architecture and skip connections. While widely used in medical image segmentation, convolutional neural networks (CNNs) have limitations in learning global semantic information due to their localized convolutional operations. Additionally, fusion methods such as element-wise addition or concatenation in encoder-decoder architectures often introduce unnecessary information, leading to the loss of local details. To address these challenges, we introduce FusNet, an innovative information fusion network. FusNet combines features from Swin Transformer and Res2Net, enhancing global dependency relationships and low-level spatial details. A fundamental subtraction unit (DIV) is used to eliminate redundant information in each layer, reducing the redundancy introduced by high-level up-sampling. Finally, FusNet aggregates features from the various layers to produce segmentation results in the decoder. Evaluation on four diverse medical image datasets, covering polyps, eye diseases, cell nuclei, and breast cancer, demonstrates FusNet's strong segmentation performance and robustness, surpassing alternative methods. FusNet therefore holds significant potential for improving medical image segmentation tasks. The source code of FusNet is freely available at https://github.com/HaoWuLab-Bioinformatics/FusNet.
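As a rough intuition for the subtraction-style fusion the abstract describes, the sketch below uses an element-wise absolute difference between two feature maps: values shared by both maps cancel out, leaving only complementary detail. This is a minimal illustration in NumPy, not the actual DIV unit from FusNet (whose exact design, including any learned convolutions, is defined in the paper and repository); the function name `div_unit` and the toy arrays are assumptions for demonstration only.

```python
import numpy as np

def div_unit(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Hypothetical sketch of a subtraction-based fusion step:
    an element-wise absolute difference suppresses information
    present in both feature maps and keeps complementary detail.
    (Illustrative only; FusNet's real DIV unit may differ.)"""
    return np.abs(feat_a - feat_b)

# Toy 1-D "feature maps": where both maps agree, the output is zero,
# so only the non-redundant entries survive.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([1.0, 2.0, 0.0, 0.0])
print(div_unit(a, b))  # → [0. 0. 3. 4.]
```

In a full encoder-decoder network, such a unit would be applied per layer between encoder branches (here, Swin Transformer and Res2Net features) before the decoder aggregates the results.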