Computer science
Convolutional neural network
Baseline (sea)
Artificial intelligence
Inference
Segmentation
Overhead (engineering)
Construct (python library)
Exploit
Sensor fusion
Computer vision
Perception
Deep learning
Dimension (graph theory)
Machine learning
Oceanography
Computer security
Mathematics
Neuroscience
Pure mathematics
Biology
Programming language
Geology
Operating system
Authors
Qingliang Liu,Shuai Zhou
Source
Journal: IEEE Transactions on Circuits and Systems II: Express Briefs
[Institute of Electrical and Electronics Engineers]
Date: 2024-04-02
Volume/Issue: 71 (9): 4296-4300
Citations: 1
Identifier
DOI:10.1109/tcsii.2024.3384419
Abstract
Autonomous driving demands both accurate perception and high-speed decision making. Automated vehicles are therefore typically equipped with multiple sensors, such as cameras and LiDARs, and the data from these different sensors are commonly fused so that the vehicle can exploit complementary environmental context and achieve better perception accuracy in free road segmentation. Fusion between multiple Deep Convolutional Neural Networks (DCNNs) has already proven to be a promising way to deliver strong perception performance. However, previous methods tend to perform fusion with computationally intensive, complex DCNNs, which results in very long inference times. To tackle this issue, we propose a framework, named LightFusion, that develops a lightweight and accurate CNN architecture (LA-RoadNet) for efficient fusion in free road segmentation. First, we construct dual-dimension shallow DCNNs (DDS-DCNNs) for LA-RoadNet by keeping the same number of fusion stages as the baseline model while cutting down the number of basic blocks in each fusion stage, which greatly reduces the computational overhead. Then, to obtain high perception accuracy, we introduce a joint unbalanced loss that guides LA-RoadNet to both mimic the baseline model's structured information and learn from the original ground-truth labels. Evaluations demonstrate that the LA-RoadNet obtained from our LightFusion framework achieves higher accuracy on the KITTI dataset while reducing MACs and parameters by up to 5.2× and 5.3×, respectively, compared with the state-of-the-art work, and delivering up to a 4.8× speedup over the baseline model.
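The abstract describes two ingredients: a shallower two-stream fusion network that keeps the baseline's fusion stages but trims the basic blocks in each stage, and a joint unbalanced loss that mixes mimicking the baseline model with ground-truth supervision. Below is a minimal PyTorch sketch of both ideas under stated assumptions: the abstract does not give the block layout, channel widths, fusion operator, or loss form, so the names ShallowFusionNet, BasicBlock, joint_unbalanced_loss, alpha, and temperature, as well as the KL-distillation-plus-cross-entropy formulation, are hypothetical placeholders rather than the authors' actual LA-RoadNet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BasicBlock(nn.Module):
    """A conv-BN-ReLU unit standing in for one basic block of a fusion stage."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return F.relu(self.bn(self.conv(x)))


def make_stage(in_ch, out_ch, num_blocks):
    """One fusion stage; the lightweight network keeps the stage but uses fewer blocks."""
    layers = [BasicBlock(in_ch, out_ch)]
    layers += [BasicBlock(out_ch, out_ch) for _ in range(num_blocks - 1)]
    return nn.Sequential(*layers)


class ShallowFusionNet(nn.Module):
    """Camera/LiDAR two-stream segmentation network with the same number of
    fusion stages as a baseline but fewer basic blocks per stage (a sketch,
    not the published LA-RoadNet)."""

    def __init__(self, blocks_per_stage=(1, 1, 1), channels=(16, 32, 64), num_classes=2):
        super().__init__()
        self.cam_stages, self.lid_stages = nn.ModuleList(), nn.ModuleList()
        in_cam, in_lid = 3, 1  # RGB image and a 1-channel LiDAR projection (assumption)
        for n_blocks, ch in zip(blocks_per_stage, channels):
            self.cam_stages.append(make_stage(in_cam, ch, n_blocks))
            self.lid_stages.append(make_stage(in_lid, ch, n_blocks))
            in_cam = in_lid = ch
        self.head = nn.Conv2d(channels[-1], num_classes, kernel_size=1)

    def forward(self, img, lidar):
        x, y = img, lidar
        for cam_stage, lid_stage in zip(self.cam_stages, self.lid_stages):
            x, y = cam_stage(x), lid_stage(y)
            x = x + y  # element-wise fusion at every stage (assumption)
        return self.head(x)  # per-pixel road/non-road logits


def joint_unbalanced_loss(student_logits, teacher_logits, labels, alpha=0.7, temperature=2.0):
    """Hypothetical joint loss: an unbalanced mix of mimicking the baseline
    teacher's soft outputs and standard supervision from ground-truth labels."""
    distill = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    supervised = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1.0 - alpha) * supervised


if __name__ == "__main__":
    # Toy usage with random tensors; shapes and the 64x64 resolution are assumptions.
    net = ShallowFusionNet()
    img, lidar = torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64)
    labels = torch.randint(0, 2, (2, 64, 64))
    teacher_logits = torch.randn(2, 2, 64, 64)  # would come from the frozen baseline model
    loss = joint_unbalanced_loss(net(img, lidar), teacher_logits, labels)
    print(loss.item())
```

The weighting alpha > 0.5 in the sketch simply reflects the "unbalanced" emphasis on mimicking the baseline's structured information; the paper may weight or structure the two terms differently.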