Authors
Shahid Karim, Geng Tong, Jing Wang, Akeel Qadir, Umar Farooq, Yiting Yu
Abstract
• Image fusion methods are comprehensively reviewed, and recent developments in DL are elaborated.
• Image fusion applications are briefly discussed.
• Imaging technologies relevant to image fusion are summarized.
• Spectral and polarized image fusion is discussed in breadth.
• Future perspectives are comprehensively discussed.

Multiple imaging modalities can be combined to provide more information about the real world than any single modality alone. Infrared images discriminate targets based on differences in their thermal radiation, whereas visible images capture rich texture details. Polarized images deliver both intensity and polarization information, and multispectral images provide spatial, spectral, and temporal information depending on the environment. Different sensors produce images with different characteristics, such as the type of degradation, salient features, and textural attributes. Over the last decades, many challenging tasks have been explored in terms of algorithms, performance assessment, processing techniques, and prospective applications. However, most existing reviews and surveys have not adequately addressed the broader possibilities of image fusion. The primary goal of this paper is to give a thorough overview of image fusion approaches, including the associated background and current breakthroughs. We introduce image fusion and categorize the methods based on conventional image processing, deep learning (DL) architectures, and fusion scenarios. Further, we emphasize recent DL developments in various image fusion scenarios. Several difficulties nevertheless remain, including the development of more advanced algorithms to support more dependable and real-time practical applications; these are discussed in the future perspectives. This study can assist researchers in dealing with multiple imaging modalities, recent fusion developments, and future perspectives.
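As a minimal illustration of the pixel-level fusion idea mentioned above, the sketch below blends a visible and an infrared image with a fixed weighted average. The file names, the 0.6/0.4 weights, and the use of OpenCV are illustrative assumptions only and do not represent any specific method surveyed in this paper, which covers far more sophisticated transform-based and DL-based approaches.

```python
# Minimal pixel-level fusion sketch (illustrative assumption, not a surveyed method):
# a weighted average of a visible and an infrared image.
import cv2

# Load the two modalities as grayscale images (hypothetical file names).
visible = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)
infrared = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)

# Resize the infrared image so the two inputs are spatially aligned.
infrared = cv2.resize(infrared, (visible.shape[1], visible.shape[0]))

# Weighted-average fusion: retain visible texture while injecting thermal contrast.
fused = cv2.addWeighted(visible, 0.6, infrared, 0.4, 0.0)

cv2.imwrite("fused.png", fused)
```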