Computer science
Multispectral image
Pedestrian detection
Modality (human-computer interaction)
Artificial intelligence
Feature (linguistics)
Pattern
RGB color model
Exploitation
Computer vision
Pedestrian
Computer security
Sociology
Philosophy
Engineering
Linguistics
Social science
Transportation engineering
Authors
Lu Zhang, Zhiyong Liu, Di Xie, Xu Yang, Hong Qiao, Kaizhu Huang, Amir Hussain
Identifier
DOI:10.1016/j.inffus.2018.09.015
Abstract
Multispectral pedestrian detection is an emerging solution with great promise in many around-the-clock applications, such as automotive driving and security surveillance. To exploit the complementary nature of the modalities and remedy their contradictory appearance, in this paper we propose a novel cross-modality interactive attention network that takes full advantage of the interactive properties of multispectral input sources. Specifically, we first use the color (RGB) and thermal streams to build a detached feature hierarchy for each modality; then, taking the global features, correlations between the two modalities are encoded in the attention module. Next, the channel responses of the halfway feature maps are recalibrated adaptively for the subsequent fusion operation. Our architecture is constructed in a multi-scale format to better deal with pedestrians of different scales, and the whole network is trained end-to-end. The proposed method is extensively evaluated on the challenging KAIST multispectral pedestrian dataset and achieves state-of-the-art performance with high efficiency.
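To make the described attention step concrete, the following is a minimal PyTorch sketch of a cross-modality channel-attention block: global descriptors from the RGB and thermal streams are pooled, jointly encoded, and used to recalibrate each stream's halfway feature maps before fusion. The class name, layer sizes, reduction ratio, and concatenation-based fusion are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class CrossModalityAttention(nn.Module):
    """Sketch of a cross-modality channel-attention block.

    Global descriptors from the RGB and thermal streams are concatenated
    and used to predict channel-wise gates that recalibrate each stream's
    halfway feature maps before fusion. All sizes are illustrative.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global descriptor per modality
        self.fc = nn.Sequential(             # encode cross-modal correlation
            nn.Linear(2 * channels, 2 * channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(2 * channels // reduction, 2 * channels),
            nn.Sigmoid(),                    # per-channel gates in (0, 1)
        )

    def forward(self, rgb_feat: torch.Tensor, thermal_feat: torch.Tensor):
        b, c, _, _ = rgb_feat.shape
        # Global features from both modalities, concatenated into one descriptor.
        g = torch.cat([self.pool(rgb_feat).view(b, c),
                       self.pool(thermal_feat).view(b, c)], dim=1)
        gates = self.fc(g)                   # shape: (b, 2c)
        w_rgb, w_thermal = gates[:, :c], gates[:, c:]
        # Recalibrate the channel responses of each stream.
        rgb_out = rgb_feat * w_rgb.view(b, c, 1, 1)
        thermal_out = thermal_feat * w_thermal.view(b, c, 1, 1)
        # Fuse by channel concatenation (one plausible fusion choice).
        return torch.cat([rgb_out, thermal_out], dim=1)


if __name__ == "__main__":
    # Usage: recalibrate and fuse 256-channel halfway feature maps.
    attn = CrossModalityAttention(channels=256)
    rgb = torch.randn(2, 256, 40, 32)
    thermal = torch.randn(2, 256, 40, 32)
    fused = attn(rgb, thermal)               # shape: (2, 512, 40, 32)
    print(fused.shape)
```

In a multi-scale detector, one such block would typically be applied at each feature-pyramid level so that pedestrians of different sizes benefit from the recalibrated fusion.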