Computer science
Artificial intelligence
Image (mathematics)
Fusion
Feature (linguistics)
Computer vision
Image fusion
Feature extraction
Exploitation
Pattern recognition (psychology)
Philosophy
Linguistics
Computer security
Authors
Jun Luo,Wenqi Ren,Xinwei Gao,Xiaochun Cao
Source
Journal: IEEE Transactions on Image Processing
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume: 32, Pages: 1529-1540
Citations: 6
Identifier
DOI: 10.1109/TIP.2023.3242824
Abstract
Most multi-exposure image fusion (MEF) methods perform unidirectional alignment within limited, local regions, which ignores the effects of augmented locations and retains only deficient global features. In this work, we propose a multi-scale bidirectional alignment network based on deformable self-attention to perform adaptive image fusion. The proposed network exploits differently exposed images and aligns them to the normal exposure to varying degrees. Specifically, we design a novel deformable self-attention module that considers variable long-distance attention and interaction and implements bidirectional alignment for image fusion. To realize adaptive feature alignment, we employ a learnable weighted summation of the different inputs and predict the offsets in the deformable self-attention module, which helps the model generalize well across various scenes. In addition, the multi-scale feature extraction strategy makes features across different scales complementary and provides both fine details and contextual features. Extensive experiments demonstrate that our proposed algorithm performs favorably against state-of-the-art MEF methods.
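The sketch below is a minimal PyTorch-style illustration of the mechanism the abstract describes: sampling offsets for deformable attention are predicted from a learnable weighted summation of the differently exposed inputs, and the offset-sampled features are combined with attention weights to align one exposure toward a reference before fusion. The class name, tensor shapes, single-scale and single-direction simplification, and all hyperparameters are assumptions made for illustration only; the authors' actual network architecture is not given in this record.

# Minimal sketch, assuming a single scale and one source/reference pair;
# not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAlignAttention(nn.Module):
    """Aligns a source feature map (e.g. an over- or under-exposed branch)
    toward a reference feature map by sampling at predicted offsets, then
    fusing the result back with a residual connection."""

    def __init__(self, channels: int, num_points: int = 4):
        super().__init__()
        self.num_points = num_points
        # Offsets are predicted from a learnable weighted sum of both inputs.
        self.mix = nn.Parameter(torch.tensor(0.5))           # learnable mixing weight
        self.offset_head = nn.Conv2d(channels, 2 * num_points, 3, padding=1)
        self.attn_head = nn.Conv2d(channels, num_points, 3, padding=1)
        self.proj = nn.Conv2d(channels, channels, 1)

    def forward(self, ref: torch.Tensor, src: torch.Tensor) -> torch.Tensor:
        b, c, h, w = ref.shape
        mixed = self.mix * ref + (1.0 - self.mix) * src      # weighted summation of inputs
        offsets = self.offset_head(mixed)                    # (b, 2*P, h, w) sampling offsets
        weights = self.attn_head(mixed).softmax(dim=1)       # (b, P, h, w) attention weights

        # Base sampling grid in normalized [-1, 1] coordinates for grid_sample.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=ref.device),
            torch.linspace(-1, 1, w, device=ref.device),
            indexing="ij",
        )
        base = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2)

        aligned = 0.0
        for p in range(self.num_points):
            off = offsets[:, 2 * p: 2 * p + 2].permute(0, 2, 3, 1)   # (b, h, w, 2)
            sampled = F.grid_sample(src, base + off, align_corners=True)
            aligned = aligned + weights[:, p: p + 1] * sampled        # attention-weighted sum
        return self.proj(aligned) + ref                               # residual fusion with the reference

A multi-scale variant would apply such a block at each level of a feature pyramid, in both alignment directions, and then fuse the aligned features across scales; that machinery is omitted here for brevity.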