Infrared
Feature (linguistics)
Decomposition
Net (polyhedron)
Object (grammar)
Artificial intelligence
Pattern recognition (psychology)
Computer science
Computer vision
Chemistry
Mathematics
Optics
Physics
Philosophy
Linguistics
Geometry
Organic chemistry
Authors
Ke Li, Di Wang, Zhangyuan Hu, Shaofeng Li, Weiping Ni, Lin Zhao, Quan Wang
Source
Journal: Cornell University - arXiv
Date: 2024-12-12
Identifier
DOI: 10.48550/arxiv.2412.09258
Abstract
Infrared-visible object detection (IVOD) seeks to harness the complementary information in infrared and visible images, thereby enhancing the performance of detectors in complex environments. However, existing methods often neglect the frequency characteristics of complementary information, such as the abundant high-frequency details in visible images and the valuable low-frequency thermal information in infrared images, thus constraining detection performance. To solve this problem, we introduce a novel Frequency-Driven Feature Decomposition Network for IVOD, called FD2-Net, which effectively captures the unique frequency representations of complementary information across multimodal visual spaces. Specifically, we propose a feature decomposition encoder, wherein the high-frequency unit (HFU) utilizes discrete cosine transform to capture representative high-frequency features, while the low-frequency unit (LFU) employs dynamic receptive fields to model the multi-scale context of diverse objects. Next, we adopt a parameter-free complementary strengths strategy to enhance multimodal features through seamless inter-frequency recoupling. Furthermore, we innovatively design a multimodal reconstruction mechanism that recovers image details lost during feature extraction, further leveraging the complementary information from infrared and visible images to enhance overall representational capacity. Extensive experiments demonstrate that FD2-Net outperforms state-of-the-art (SOTA) models across various IVOD benchmarks, i.e. LLVIP (96.2% mAP), FLIR (82.9% mAP), and M3FD (83.5% mAP).
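The abstract describes a frequency-split encoder in which a high-frequency unit (HFU) isolates detail via the discrete cosine transform and a low-frequency unit (LFU) models multi-scale context. The PyTorch sketch below is a minimal illustration of that idea under stated assumptions, not the authors' implementation: the class names `HighFrequencyUnit` and `LowFrequencyUnit`, the `keep_ratio` frequency cutoff, and the use of parallel dilated convolutions as a stand-in for "dynamic receptive fields" are all choices made for this example.

```python
# Minimal sketch of a DCT-based high-frequency unit and a multi-scale
# low-frequency unit. Illustrative only; not the FD2-Net code.
import math
import torch
import torch.nn as nn


def dct_matrix(n: int) -> torch.Tensor:
    """Orthonormal DCT-II basis matrix of shape (n, n)."""
    k = torch.arange(n).unsqueeze(1).float()   # frequency index
    i = torch.arange(n).unsqueeze(0).float()   # spatial index
    mat = torch.cos(math.pi * (i + 0.5) * k / n) * math.sqrt(2.0 / n)
    mat[0] /= math.sqrt(2.0)                   # normalise the DC row
    return mat


class HighFrequencyUnit(nn.Module):
    """Keeps only high-frequency DCT coefficients of a feature map
    (assumption: 'low frequency' = the top-left coefficient block)."""

    def __init__(self, keep_ratio: float = 0.25):
        super().__init__()
        self.keep_ratio = keep_ratio

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, C, H, W)
        _, _, h, w = x.shape
        dh, dw = dct_matrix(h).to(x), dct_matrix(w).to(x)
        coeff = dh @ x @ dw.T                              # 2-D DCT per channel
        cut_h, cut_w = int(h * self.keep_ratio), int(w * self.keep_ratio)
        coeff[..., :cut_h, :cut_w] = 0                     # zero out low frequencies
        return dh.T @ coeff @ dw                           # inverse (orthonormal) DCT


class LowFrequencyUnit(nn.Module):
    """Multi-scale context via parallel dilated convolutions, used here as a
    simple proxy for the dynamic receptive fields mentioned in the abstract."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
             for d in dilations]
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


if __name__ == "__main__":
    feat = torch.randn(2, 16, 64, 64)       # dummy infrared or visible feature map
    high = HighFrequencyUnit()(feat)        # high-frequency detail component
    low = LowFrequencyUnit(16)(feat)        # multi-scale low-frequency context
    print(high.shape, low.shape)            # both (2, 16, 64, 64)
```

The two outputs could then be recombined (for example, by summation or concatenation) to mimic the inter-frequency recoupling the abstract refers to; how FD2-Net actually fuses them is not specified here.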