Keywords
panchromatic image, multispectral image, artificial intelligence, computer science, image fusion, image resolution, computer vision, fusion, mean squared error, pattern recognition, pixel, image (mathematics), mathematics, statistics
Authors
Yinghui Xing, Shuyuan Yang, Yan Zhang, Yanning Zhang
Identifier
DOI:10.1109/tip.2022.3215906
Abstract
Recently, deep learning based multispectral (MS) and panchromatic (PAN) image fusion methods have been proposed, which extract features automatically and hierarchically through a series of non-linear transformations to model the complicated imaging discrepancy. However, they tend to focus on the extraction and compensation of spatial details and use the mean squared error or mean absolute error as the loss function, regardless of the preservation of the spectral information contained in multispectral images. To improve both spatial and spectral resolution, this paper presents a novel fusion model that takes spectral preservation into consideration and learns spectral cues from the process of generating a spectrally refined multispectral image, which is constrained by a spectral loss between the generated image and the reference image. These spectral cues are then used to modulate the PAN features to obtain the final fusion result. Experimental results on reduced-resolution and full-resolution datasets demonstrate that the proposed method obtains a better fusion result in terms of visual inspection and evaluation indices when compared with current state-of-the-art methods.