Computer science
Panchromatic film
Artificial intelligence
Interpretability
Multispectral image
Convolutional neural network
Deep learning
Image resolution
Network architecture
Image fusion
Pattern recognition (psychology)
Image (mathematics)
Computer security
Authors
Zhikang Xiang, Liang Xiao, Jingxiang Yang, Wenzhi Liao, Wilfried Philips
Identifier
DOI: 10.1109/TGRS.2022.3197438
Abstract
Pansharpening is an image fusion procedure that aims to produce a high-spatial-resolution multispectral image by combining a low-spatial-resolution multispectral image with a high-spatial-resolution panchromatic image. The most popular and successful paradigm for pansharpening is the detail-injection framework, yet it cannot fully exploit the complex and nonlinear complementary features of the two images. In this paper, we propose a detail-injection-model-inspired deep fusion network for pansharpening (DIM-FuNet). First, by treating pansharpening as a complicated, nonlinear detail learning and injection problem, we establish a unified optimization-based detail-injection model with three detail fidelity terms: 1) a band-dependent spatial detail fidelity term, 2) a local detail fidelity term, and 3) a complicated detail synthesis term. Second, the model is optimized via iterative gradient descent and unfolded into a deep convolutional neural network. The resulting unrolled network has three branches: a point-wise convolutional sub-network and a depth-wise convolutional sub-network correspond to the first two detail fidelity terms, while an adaptive weighted reconstruction module with a fusion sub-network aggregates the details from the two branches and synthesizes the final complicated details. Finally, the deep unrolling network is trained in an end-to-end manner. Unlike traditional deep fusion networks, the architecture of DIM-FuNet is guided by the optimization model and therefore offers better interpretability. Experimental results at reduced and full resolution demonstrate the effectiveness of the proposed DIM-FuNet, which achieves the best performance compared with state-of-the-art pansharpening methods.
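The abstract describes the triple-branch design (a point-wise branch, a depth-wise branch, and an adaptive weighted fusion module) but not its exact layer configuration. The sketch below is only a minimal PyTorch illustration of how such a detail-injection stage could be wired; the class name, layer widths, and fusion scheme are illustrative assumptions and are not taken from the DIM-FuNet paper.

```python
import torch
import torch.nn as nn

class TripleBranchDetailFusion(nn.Module):
    """Minimal sketch of a triple-branch detail-injection fusion stage.

    All layer widths and the adaptive fusion design here are assumptions for
    illustration only; the actual DIM-FuNet configuration is defined in the paper.
    """

    def __init__(self, ms_bands: int = 4, feats: int = 32):
        super().__init__()
        # Point-wise (1x1) branch: models band-dependent mixing of details.
        self.pointwise = nn.Sequential(
            nn.Conv2d(ms_bands + 1, feats, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feats, ms_bands, kernel_size=1),
        )
        # Depth-wise branch: models local spatial details within each band.
        self.depthwise = nn.Sequential(
            nn.Conv2d(ms_bands + 1, ms_bands + 1, kernel_size=3, padding=1,
                      groups=ms_bands + 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ms_bands + 1, ms_bands, kernel_size=1),
        )
        # Fusion sub-network: predicts per-pixel weights to aggregate both branches.
        self.fusion = nn.Sequential(
            nn.Conv2d(2 * ms_bands, feats, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feats, ms_bands, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, lrms_up: torch.Tensor, pan: torch.Tensor) -> torch.Tensor:
        # lrms_up: upsampled low-resolution MS image, shape (B, ms_bands, H, W)
        # pan:     panchromatic image,                shape (B, 1, H, W)
        x = torch.cat([lrms_up, pan], dim=1)
        d_point = self.pointwise(x)                  # band-dependent details
        d_depth = self.depthwise(x)                  # local spatial details
        w = self.fusion(torch.cat([d_point, d_depth], dim=1))
        details = w * d_point + (1.0 - w) * d_depth  # adaptive weighted aggregation
        return lrms_up + details                     # detail injection


if __name__ == "__main__":
    ms = torch.randn(1, 4, 256, 256)   # upsampled multispectral input
    pan = torch.randn(1, 1, 256, 256)  # panchromatic input
    out = TripleBranchDetailFusion()(ms, pan)
    print(out.shape)  # torch.Size([1, 4, 256, 256])
```

In an unrolled network as described in the abstract, a stage of this kind would be repeated for each gradient-descent iteration of the underlying optimization model, with the whole stack trained end to end.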