Keywords: sensor fusion, perception, robustness, radar, lidar, artificial intelligence, remote sensing, telecommunications
Authors
Chao Xiang, Feng Chen, Xiaopo Xie, Botian Shi, Hao Lu, Yisheng Lv, Mingchuan Yang, Zhendong Niu
Source
Journal: IEEE Intelligent Transportation Systems Magazine
[Institute of Electrical and Electronics Engineers]
Date: 2023-08-04
Volume/Issue: 15(5): 36-58
Citations: 18
Identifier
DOI: 10.1109/mits.2023.3283864
Abstract
Autonomous driving (AD), including single-vehicle intelligent AD and vehicle–infrastructure cooperative AD, has become a research hot spot in academia and industry, and multi-sensor fusion is a fundamental task for AD system perception. However, the multi-sensor fusion process faces two problems: differences in the type and dimensionality of the sensory data acquired by different sensors (cameras, lidar, millimeter-wave radar, and so on), and differences in environmental-perception performance caused by different fusion strategies. In this article, we study multiple papers on multi-sensor fusion in the field of AD and address the problem that the category division in current multi-sensor fusion perception is insufficiently detailed, unclear, and largely subjective, which makes classification strategies differ significantly among similar algorithms. We propose a novel multi-sensor fusion taxonomy that divides fusion perception classification strategies into two categories, symmetric fusion and asymmetric fusion, and into seven subcategories of strategy combinations covering data, features, and results. In addition, the reliability of current AD perception is limited by its insufficient environment-perception capability and by the limited robustness of data-driven methods in extreme situations (e.g., blind areas). This article also summarizes innovative applications of multi-sensor fusion classification strategies in AD cooperative perception.
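The abstract's taxonomy builds on the classic distinction between fusing raw data, extracted features, or final results. The sketch below is purely illustrative (it is not the paper's implementation, and the toy camera/lidar inputs and helper names are assumptions) and shows what those three fusion levels mean in the simplest possible terms:

```python
# Illustrative sketch of the three classic fusion levels the abstract
# groups into its taxonomy: data-level, feature-level, result-level.
# Toy inputs stand in for real camera images and lidar point clouds.

def data_level_fusion(camera_raw, lidar_raw):
    """Early fusion: combine raw measurements before any processing."""
    return camera_raw + lidar_raw  # e.g., concatenated raw samples

def extract_features(raw):
    """Stand-in feature extractor (real systems use CNNs, point networks)."""
    return [sum(raw) / len(raw), max(raw)]  # mean and peak as toy features

def feature_level_fusion(camera_raw, lidar_raw):
    """Mid-level fusion: extract features per sensor, then combine them."""
    return extract_features(camera_raw) + extract_features(lidar_raw)

def result_level_fusion(camera_detections, lidar_detections):
    """Late fusion: each sensor produces detections; merge final results,
    keeping the highest-confidence score per detected object."""
    merged = {}
    for obj_id, score in camera_detections + lidar_detections:
        merged[obj_id] = max(score, merged.get(obj_id, 0.0))
    return merged

camera = [0.2, 0.5, 0.9]
lidar = [1.1, 0.7]
print(data_level_fusion(camera, lidar))     # raw streams concatenated
print(feature_level_fusion(camera, lidar))  # per-sensor features combined
print(result_level_fusion([("car", 0.8)], [("car", 0.9), ("ped", 0.6)]))
```

The paper's contribution is orthogonal to this toy code: it classifies which of these levels each sensor's information enters the fusion at, and whether the sensors are treated symmetrically or asymmetrically.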