Computer science
Segmentation
Artificial intelligence
Feature (linguistics)
Encoder
Pattern recognition (psychology)
Residual
Computer vision
Block (permutation group theory)
Algorithm
Mathematics
Geometry
Operating system
Philosophy
Linguistics
Authors
Jiajia Ni,Wei Mu,Anqi Pan,Zhengming Chen
Identifier
DOI:10.1016/j.bspc.2023.105861
Abstract
Automatic retinal vessel segmentation plays a crucial role in the diagnosis and assessment of various ophthalmologic diseases. Currently, the primary retinal vessel segmentation algorithms are based on the encoder-decoder structure. However, these U-Net analogs suffer from the loss of both spatial and semantic information, caused by continuous up-sampling operations in the decoder structure. In this paper, we rethink the above problem and build a novel deep neural network for retinal vessel segmentation, called FSE-Net. Specifically, to address the issue of feature information loss and enhance the performance of retinal vessel segmentation, we eliminate the decoder structure. In particular, we introduce a multi-head feature fusion block (MFF) as a substitute for the continuous up-sampling operation. Additionally, the encoder stage of FSE-Net incorporates a residual feature separable block (RFSB) to further refine and distill features, thereby enhancing the capability of feature extraction. Subsequently, we employ a residual atrous spatial feature aggregate module (RASF) to expand the network's receptive field by incorporating multi-scale feature information. We conducted experiments on five widely recognized databases for retinal vessel segmentation, namely DRIVE, CHASEDB1, STARE, IOSTAR, and LES-AV. The results demonstrate that our proposed FSE-Net outperforms state-of-the-art approaches in terms of segmentation performance. Moreover, we demonstrate the feasibility of achieving superior segmentation performance without employing the traditional U-Net analog network structure.
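The RASF module expands the receptive field with atrous (dilated) convolutions at multiple rates. As a minimal illustration of why this works, the sketch below computes the effective kernel size of a dilated convolution; the dilation rates used are assumed for illustration only, since the abstract does not specify the rates used in RASF.

```python
def effective_kernel_size(k: int, rate: int) -> int:
    """Effective spatial extent of a k x k kernel with dilation `rate`.

    A dilated kernel inserts (rate - 1) zeros between taps, so it covers
    k + (k - 1) * (rate - 1) positions while keeping k * k parameters.
    """
    return k + (k - 1) * (rate - 1)

# Parallel atrous branches at increasing rates (illustrative values):
# each branch sees a wider context at no extra parameter cost.
for rate in (1, 2, 4):
    print(rate, effective_kernel_size(3, rate))
# rate 1 -> 3, rate 2 -> 5, rate 4 -> 9
```

Aggregating branches like these gives the multi-scale context the abstract attributes to RASF, without the resolution loss that extra pooling layers would cause.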