Computer science
Image (mathematics)
Degradation (telecommunications)
Fusion
Natural language processing
Information retrieval
Artificial intelligence
Human-computer interaction
Multimedia
Linguistics
Telecommunications
Philosophy
Authors
Xunpeng Yi, Han Xu, Hao Zhang, Linfeng Tang, Jiayi Ma
Source
Journal: Cornell University - arXiv
Date: 2024-03-24
Identifier
DOI: 10.48550/arxiv.2403.16387
Abstract
Image fusion aims to combine information from different source images into a single, comprehensively representative image. Existing fusion methods are typically helpless against degradations in low-quality source images and cannot interactively accommodate diverse subjective and objective needs. To address these issues, we introduce a novel approach that leverages a semantic text-guided image fusion model for the degradation-aware and interactive image fusion task, termed Text-IF. It innovatively extends classical image fusion to text-guided image fusion, harmoniously addressing degradation and interaction issues during fusion. Through its text semantic encoder and semantic interaction fusion decoder, Text-IF enables all-in-one degradation-aware processing of infrared and visible images and flexible, interactive fusion outcomes. In this way, Text-IF achieves not only multi-modal image fusion but also multi-modal information fusion. Extensive experiments show that the proposed text-guided image fusion strategy has clear advantages over SOTA methods in both fusion performance and degradation handling. The code is available at https://github.com/XunpengYi/Text-IF.
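The abstract describes the architecture only at a high level: a text semantic encoder produces an embedding that conditions a semantic interaction fusion decoder operating on infrared and visible features. Below is a minimal, hypothetical PyTorch sketch of that pattern. Every module name, dimension, the toy pooled-embedding text encoder, the additive feature fusion, and the FiLM-style conditioning are illustrative assumptions, not the authors' implementation; the actual code is in the linked repository.

```python
# Minimal sketch of text-guided infrared-visible fusion, assuming the
# high-level design in the abstract. All modules here are illustrative
# stand-ins, NOT the Text-IF implementation.
import torch
import torch.nn as nn

class TextSemanticEncoder(nn.Module):
    """Toy stand-in for a pretrained text encoder (e.g., CLIP-style):
    learned token embeddings mean-pooled into one semantic vector."""
    def __init__(self, vocab_size=1000, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)

    def forward(self, token_ids):              # (B, T) integer tokens
        return self.embed(token_ids).mean(1)   # (B, dim) pooled semantics

class SemanticInteractionFusionDecoder(nn.Module):
    """Fuses IR/visible features; the text embedding modulates the fused
    features via an assumed FiLM-style scale-and-shift."""
    def __init__(self, feat_ch=32, text_dim=128):
        super().__init__()
        self.film = nn.Linear(text_dim, 2 * feat_ch)  # -> (scale, shift)
        self.out = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, feat_ir, feat_vis, text_emb):
        fused = feat_ir + feat_vis                    # simple additive fusion
        scale, shift = self.film(text_emb).chunk(2, dim=1)
        fused = fused * (1 + scale[..., None, None]) + shift[..., None, None]
        return self.out(fused)                        # fused 1-channel image

class TextGuidedFusionSketch(nn.Module):
    def __init__(self, feat_ch=32, text_dim=128):
        super().__init__()
        self.enc_ir = nn.Conv2d(1, feat_ch, 3, padding=1)   # infrared branch
        self.enc_vis = nn.Conv2d(1, feat_ch, 3, padding=1)  # visible branch
        self.text_enc = TextSemanticEncoder(dim=text_dim)
        self.decoder = SemanticInteractionFusionDecoder(feat_ch, text_dim)

    def forward(self, ir, vis, token_ids):
        return self.decoder(self.enc_ir(ir), self.enc_vis(vis),
                            self.text_enc(token_ids))

# Usage: fuse a 64x64 IR/visible pair under a (dummy) tokenized text prompt.
model = TextGuidedFusionSketch()
ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
prompt = torch.randint(0, 1000, (1, 8))   # stand-in for a tokenized prompt
fused = model(ir, vis, prompt)            # (1, 1, 64, 64)
print(fused.shape)
```

The conditioning point is the key design idea this sketch illustrates: because the text embedding rescales the fused features, different prompts (e.g., requesting degradation removal or a different fusion emphasis) steer the same decoder toward different outcomes, which is what makes the fusion interactive.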