Computer Science
Segmentation
Artificial Intelligence
Image Segmentation
Image Resolution
Pattern Recognition (Psychology)
Image (Mathematics)
Computer Vision
Remote Sensing
Geology
Authors
Renchu Guan,Mingming Wang,Lorenzo Bruzzone,Haishi Zhao,Chen Yang
Identifier
DOI: 10.1109/TGRS.2023.3272614
Abstract
Semantic segmentation is one of the most challenging tasks in very high resolution (VHR) remote sensing applications. Deep convolutional neural networks (CNNs) based on the attention mechanism have shown outstanding performance in semantic segmentation of VHR remote sensing images. However, existing attention-guided methods require the estimation of a large number of parameters, which is hampered by the limited number of available labeled samples and results in underperforming segmentation. In this paper, we propose a multi-scale feature fusion lightweight model (MSFFL) to greatly reduce the number of parameters and improve the accuracy of semantic segmentation. In this model, two parallel enhanced attention modules, i.e., the spatial attention module (SAM) and the channel attention module (CAM), are designed by introducing encoded position information. Then a covariance calculation strategy is adopted to recalibrate the generated attention maps. Integrating the enhanced attention modules into the proposed lightweight module yields an efficient lightweight attention network (LiANet). The performance of the proposed LiANet is assessed on two benchmark datasets. Experimental results demonstrate that LiANet achieves promising performance with a small number of parameters.
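The abstract describes two parallel attention modules (SAM and CAM) that incorporate position information, with the resulting attention maps recalibrated by a covariance calculation. Below is a minimal PyTorch sketch of that general idea only; the module layout, the coordinate-based position encoding, and the covariance weighting are illustrative assumptions, not the authors' actual LiANet implementation.

```python
# Illustrative sketch of parallel spatial/channel attention with a
# covariance-based recalibration. All design details here are assumptions
# made for demonstration; they do not reproduce the paper's LiANet.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAttention(nn.Module):
    """Spatial attention with a simple (x, y) coordinate encoding."""
    def __init__(self, channels):
        super().__init__()
        # +2 input channels for the normalized coordinate maps
        self.conv = nn.Conv2d(channels + 2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return torch.sigmoid(self.conv(torch.cat([x, ys, xs], dim=1)))  # B x 1 x H x W


class ChannelAttention(nn.Module):
    """Channel attention from globally pooled features."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        pooled = F.adaptive_avg_pool2d(x, 1).flatten(1)                     # B x C
        return torch.sigmoid(self.fc(pooled)).unsqueeze(-1).unsqueeze(-1)   # B x C x 1 x 1


class ParallelAttentionBlock(nn.Module):
    """Runs SAM and CAM in parallel and recalibrates channels by how
    strongly each feature map co-varies with the spatial attention map
    (one plausible reading of the covariance strategy in the abstract)."""
    def __init__(self, channels):
        super().__init__()
        self.sam = SpatialAttention(channels)
        self.cam = ChannelAttention(channels)

    def forward(self, x):
        b, c, h, w = x.shape
        spatial = self.sam(x)                      # B x 1 x H x W
        channel = self.cam(x)                      # B x C x 1 x 1

        # Covariance between each channel and the spatial attention map (assumed).
        feat = x.view(b, c, -1)
        feat = feat - feat.mean(dim=2, keepdim=True)
        sp = spatial.view(b, 1, -1)
        sp = sp - sp.mean(dim=2, keepdim=True)
        cov = (feat * sp).mean(dim=2)              # B x C
        recal = torch.softmax(cov, dim=1).view(b, c, 1, 1)

        return x * spatial * channel * (1.0 + recal)


if __name__ == "__main__":
    block = ParallelAttentionBlock(channels=64)
    out = block(torch.randn(2, 64, 32, 32))
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

The sketch keeps the two attention branches lightweight (a single 7x7 convolution and a small bottleneck MLP), consistent with the paper's emphasis on a small parameter count, but the exact structure in LiANet may differ.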