Computer science
Fuse (electrical)
Object detection
Artificial intelligence
Segmentation
Context (archaeology)
Exploit
Modal
Deep learning
Sensor fusion
Computer vision
Machine learning
Engineering
Paleontology
Electrical engineering
Biology
Chemistry
Polymer chemistry
Computer security
Authors
Di Feng, Christian Schütz, Lars Rosenbaum, Heinz Hertlein, Claudius Gläser, Fabian Timm, W. Wiesbeck, Klaus Dietmayer
Source
Journal: IEEE Transactions on Intelligent Transportation Systems
[Institute of Electrical and Electronics Engineers]
Date: 2020-02-17
Volume/Issue: 22 (3): 1341-1360
Citations: 812
Identifier
DOI: 10.1109/tits.2020.2972974
Abstract
Recent advancements in perception for autonomous driving are driven by deep learning. In order to achieve robust and accurate scene understanding, autonomous vehicles are usually equipped with different sensors (e.g. cameras, LiDARs, Radars), and multiple sensing modalities can be fused to exploit their complementary properties. In this context, many methods have been proposed for deep multi-modal perception problems. However, there is no general guideline for network architecture design, and questions of "what to fuse", "when to fuse", and "how to fuse" remain open. This review paper attempts to systematically summarize methodologies and discuss challenges for deep multi-modal object detection and semantic segmentation in autonomous driving. To this end, we first provide an overview of on-board sensors on test vehicles, open datasets, and background information for object detection and semantic segmentation in autonomous driving research. We then summarize the fusion methodologies and discuss challenges and open questions. In the appendix, we provide tables that summarize topics and methods. We also provide an interactive online platform to navigate each reference: https://boschresearch.github.io/multimodalperception/.
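The "what to fuse" and "when to fuse" questions raised in the abstract can be illustrated with a minimal sketch of two common fusion points. All feature sizes and score values below are hypothetical and chosen only for illustration; they are not taken from the paper.

```python
import numpy as np

# Hypothetical per-modality feature vectors (dimensions are illustrative):
camera_features = np.random.rand(128)  # e.g. output of an image backbone
lidar_features = np.random.rand(64)    # e.g. output of a point-cloud encoder

# Feature-level ("early"/"middle") fusion: concatenate modality features
# before a shared task head, so the head sees both modalities jointly.
fused_features = np.concatenate([camera_features, lidar_features])

# Decision-level ("late") fusion: each modality produces its own class
# scores, and the outputs are combined, here by simple averaging.
camera_scores = np.array([0.7, 0.2, 0.1])  # hypothetical softmax outputs
lidar_scores = np.array([0.6, 0.3, 0.1])
fused_scores = (camera_scores + lidar_scores) / 2
```

The trade-off sketched here is the one the review surveys: fusing earlier lets a network learn cross-modal interactions, while fusing later keeps per-modality pipelines independent and simpler to train.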