Sensor fusion
Computer science
Matching (statistics)
Fusion
Artificial intelligence
Object detection
Perception
Data mining
Tracking (education)
Object (grammar)
Radar
Computer vision
Machine learning
Pattern recognition (psychology)
Philosophy
Psychology
Statistics
Biology
Neuroscience
Telecommunications
Linguistics
Mathematics
Pedagogy
Authors
Cheng Zhang, Hai Wang, Long Chen, Yicheng Li, Yingfeng Cai
Identifier
DOI: 10.1109/tnnls.2023.3325527
Abstract
The performance of environmental perception is critical for the safe driving of intelligent connected vehicles (ICVs). Currently, the most prevalent technical solutions rely on multimodal data fusion to achieve comprehensive perception of the surrounding environment. However, existing fusion-based perception methods suffer from low sensor-data utilization and poorly designed fusion strategies, which severely limit their performance in adverse weather conditions. To address these issues, this article proposes a novel multimodal data fusion framework called MixedFusion. Within this framework, we introduce two fusion strategies tailored to the data characteristics of each sensor: high-level semantic guidance (HLSG) and multipriority matching (MPM). These strategies not only enable efficient utilization of the multimodal data but also achieve complementary fusion across modalities. We perform extensive experiments on the nuScenes and K-Radar datasets. The experimental results demonstrate that the proposed fusion framework significantly improves the performance of 3-D object detection and tracking in severe weather conditions.
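The abstract does not detail how HLSG or MPM work, so the sketch below is not the paper's MixedFusion method. It is only a minimal, generic illustration of feature-level multimodal fusion for bird's-eye-view (BEV) 3-D perception, the broad idea the abstract builds on. All module names, channel widths, and tensor shapes are hypothetical assumptions for illustration.

```python
import torch
import torch.nn as nn

class NaiveFeatureFusion(nn.Module):
    """Generic feature-level fusion of camera and radar BEV features.

    NOT the paper's MixedFusion/HLSG/MPM; a minimal sketch of the
    multimodal-fusion idea only. Layer sizes are hypothetical.
    """

    def __init__(self, cam_channels=256, radar_channels=64, out_channels=256):
        super().__init__()
        # Concatenate the two modalities along the channel axis,
        # then mix them with a single convolutional block.
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_channels + radar_channels, out_channels,
                      kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam_bev, radar_bev):
        # cam_bev:   (B, cam_channels,   H, W) BEV camera features
        # radar_bev: (B, radar_channels, H, W) BEV radar features
        return self.fuse(torch.cat([cam_bev, radar_bev], dim=1))

# Usage: fuse hypothetical BEV feature maps before a 3-D detection head.
fusion = NaiveFeatureFusion()
cam = torch.randn(2, 256, 128, 128)
radar = torch.randn(2, 64, 128, 128)
fused = fusion(cam, radar)  # shape: (2, 256, 128, 128)
```

Simple concatenation treats both modalities uniformly; the abstract's point is precisely that such undifferentiated fusion underuses each sensor, which motivates per-sensor strategies like HLSG and MPM.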