Multispectral image
Pedestrian detection
Sensor fusion
Computer science
Pedestrian
Modality (human–computer interaction)
Fusion
Artificial intelligence
Transformer
Computer vision
Data mining
Engineering
Transportation engineering
Electrical engineering
Authors
Yinghui Xing,Shuo Yang,Song Wang,Shizhou Zhang,Guoqiang Liang,Xiuwei Zhang,Yanning Zhang
Identifier
DOI:10.1109/tits.2024.3450584
Abstract
Multispectral pedestrian detection is an important task for many around-the-clock applications, since the visible and thermal modalities can provide complementary information, especially under low-light conditions. Because two modalities are involved, misalignment and modality imbalance are the most significant issues in multispectral pedestrian detection. In this paper, we propose the MultiSpectral pedestrian DEtection TRansformer (MS-DETR) to address these issues. MS-DETR consists of two modality-specific backbones and Transformer encoders, followed by a multi-modal Transformer decoder in which the visible and thermal features are fused. To resist misalignment between the multi-modal images, we design a loosely coupled fusion strategy that sparsely samples keypoints from the multi-modal features independently and fuses them with adaptively learned attention weights. Moreover, based on the insight that not only different modalities but also different pedestrian instances contribute with different confidence to the final detection, we further propose an instance-aware modality-balanced optimization strategy, which preserves separate visible and thermal decoder branches and aligns their predicted slots through an instance-wise dynamic loss. Our end-to-end MS-DETR achieves superior performance on the challenging KAIST, CVC-14, and LLVIP benchmark datasets. The source code is available at https://github.com/YinghuiXing/MS-DETR.
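The loosely coupled fusion described in the abstract can be illustrated with a minimal sketch: each detection query samples a few keypoints independently from the visible and thermal feature maps (so the two modalities need not be pixel-aligned) and fuses the sampled features with adaptively learned attention weights. All function names, shapes, and the nearest-neighbor sampling below are illustrative assumptions, not the authors' actual implementation (which uses learned offsets and bilinear sampling inside a Transformer decoder).

```python
# Hypothetical sketch of the loosely coupled fusion idea in MS-DETR.
# Names and shapes are illustrative; the real model learns sampling
# offsets and attention logits per query inside the decoder.
import numpy as np

def sample_features(feat, points):
    """Nearest-neighbor sampling of feature vectors at normalized keypoints.
    feat: (H, W, C) feature map; points: (K, 2) coords in [0, 1)."""
    h, w, _ = feat.shape
    ys = np.clip((points[:, 0] * h).astype(int), 0, h - 1)
    xs = np.clip((points[:, 1] * w).astype(int), 0, w - 1)
    return feat[ys, xs]  # (K, C)

def loosely_coupled_fusion(vis_feat, th_feat, vis_pts, th_pts, logits):
    """Fuse keypoints sampled independently from each modality.
    logits: (2K,) unnormalized attention weights over all sampled points."""
    sampled = np.concatenate([sample_features(vis_feat, vis_pts),
                              sample_features(th_feat, th_pts)], axis=0)  # (2K, C)
    w = np.exp(logits - logits.max())
    w /= w.sum()                # softmax over all sampled points of both modalities
    return w @ sampled          # (C,) fused feature for one query

rng = np.random.default_rng(0)
vis = rng.normal(size=(8, 8, 4))      # visible feature map
th = rng.normal(size=(8, 8, 4))       # thermal feature map
pts_v = rng.random((3, 2))            # keypoints sampled independently per
pts_t = rng.random((3, 2))            # modality -> tolerates misalignment
fused = loosely_coupled_fusion(vis, th, pts_v, pts_t, rng.normal(size=6))
print(fused.shape)  # (4,)
```

Because the keypoint locations are chosen per modality rather than copied across modalities, a spatial shift between the visible and thermal images does not force the fusion to mix mismatched positions; the learned weights can also down-weight the less reliable modality per instance.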