Computer science
Artificial intelligence
Robustness
Computer vision
Pixel
Shadow detection
Encoder
Image resolution
Object detection
Segmentation
Remote sensing
Geography
Authors
Qiqi Zhu, Yang Yang, Xiaoliang Sun, Minyi Guo
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing
[Institute of Electrical and Electronics Engineers]
Date: 2022-01-01
Volume/Pages: 60: 1-15
Citations: 6
Identifier
DOI: 10.1109/TGRS.2022.3143886
Abstract
Shadow detection automatically marks shadow pixels in high-spatial-resolution (HSR) imagery with specific categories based on meaningful colorific features. Accurate shadow mapping is crucial for interpreting images and recovering radiometric information. Recent studies have demonstrated the superiority of deep learning for shadow detection in very-high-resolution satellite imagery. However, previous methods usually stack convolutional layers, which causes loss of spatial information. Moreover, shadows vary in scale and shape, and small, irregular shadows are challenging to detect. Finally, the unbalanced distribution of foreground and background biases the common binary cross-entropy loss function, which seriously hinders model training. To remedy these issues, a contextual detail-aware network (CDANet), a novel framework for extracting accurate and complete shadows, is proposed. In CDANet, a double-branch module is embedded in the encoder-decoder structure to effectively alleviate the loss of low-level local information during convolution. A contextual semantic fusion connection with a residual dilation module is proposed to provide multiscale contextual information for diverse shadows. A hybrid loss function that computes the shadow distribution per pixel is designed to retain the detailed information of tiny shadows and improve the robustness of the model. The performance of the proposed method is validated on two distinct shadow detection datasets, and CDANet shows higher portability and robustness than other methods.
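The abstract credits the residual dilation module with supplying multiscale context. The paper's exact module design is not given here, but the core idea of dilated convolution is that stacking stride-1 dilated kernels grows the receptive field without downsampling. A minimal sketch of that arithmetic, assuming 3x3 kernels and the common dilation schedule (the rates below are illustrative, not taken from the paper):

```python
def receptive_field(dilation_rates, kernel_size=3):
    """Effective receptive field of a stack of stride-1 dilated convolutions.

    Each layer with dilation d enlarges the field by d * (kernel_size - 1),
    so context grows without any loss of spatial resolution.
    """
    rf = 1
    for d in dilation_rates:
        rf += d * (kernel_size - 1)
    return rf

# Three plain 3x3 convs see only a 7x7 window...
print(receptive_field([1, 1, 1]))  # → 7
# ...while dilations 1, 2, 4 see 15x15 at the same parameter count.
print(receptive_field([1, 2, 4]))  # → 15
```

This is why dilated stacks suit shadows of varying scale: large shadows need wide context, while stride-1 processing preserves the fine detail of small ones.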
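The abstract motivates the hybrid loss by the bias of plain binary cross-entropy under foreground/background imbalance. The paper's exact formulation is not stated here; a common hybrid of this kind combines per-pixel BCE with a region-level Dice term, which is insensitive to the background pixel count. A minimal NumPy sketch under that assumption (the equal weighting of the two terms is also an assumption):

```python
import numpy as np

def hybrid_loss(pred, target, eps=1e-7):
    """Illustrative hybrid loss: per-pixel BCE + Dice (not the paper's exact form).

    pred:   predicted shadow probabilities in [0, 1]
    target: binary ground-truth shadow mask
    """
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    # Per-pixel binary cross-entropy: dominated by the majority class.
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    # Dice term: measures foreground overlap, so tiny shadows still matter.
    intersection = np.sum(pred * target)
    dice = 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
    return bce + dice

mask = np.array([1.0, 0.0, 0.0, 0.0])          # one shadow pixel among background
good = hybrid_loss(np.array([0.9, 0.1, 0.1, 0.1]), mask)
bad  = hybrid_loss(np.array([0.1, 0.1, 0.1, 0.1]), mask)  # misses the shadow
```

The Dice term penalizes missing the single shadow pixel far more than BCE alone would, which matches the abstract's goal of retaining tiny-shadow detail.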