Keywords
LiDAR, failure, computer science, pruning, point cloud, artificial intelligence, object detection, reduction (mathematics), feature extraction, encoder, deep learning, feature (linguistics), convolutional neural network, inference, computation, computer vision, pattern recognition (psychology), algorithm, remote sensing, mathematics, biology, geometry, agronomy, operating system, geology, linguistics, philosophy, parallel computing
Authors
Manoj Rohit Vemparala, Anupinder Singh, Ahmed Mzid, Nael Fasfous, Alexander Frickenstein, Florian Mirus, Hans-Joerg Voegel, Naveen Shankar Nagaraja, Walter Stechele
Identifier
DOI: 10.1109/ivworkshops54471.2021.9669256
Abstract
Deep neural networks provide high accuracy for perception tasks, but they require substantial computational power. In particular, LiDAR-based object detection delivers good accuracy and real-time performance, yet demands heavy computation due to expensive feature extraction from point-cloud data in the encoder and backbone networks. We investigate the model-complexity versus accuracy trade-off using reinforcement-learning-based pruning for PointPillars, a recent LiDAR-based 3D object detection network. We evaluate the model on the KITTI validation set (80/20 split) using the mean average precision (mAP) for the car class. We prune the original PointPillars model (mAP 89.84) and achieve a 65.8% reduction in floating-point operations (FLOPs) for a marginal accuracy loss. The compression corresponds to a 31.7% reduction in inference time and a 35% reduction in GPU memory on a GTX 1080 Ti.
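The FLOP savings reported above come from structured channel pruning of convolutional layers. The paper uses reinforcement learning to choose what to prune; the minimal sketch below only illustrates the underlying accounting, i.e. how removing output channels in one layer also shrinks the next layer's input and compounds the FLOP reduction. The layer shapes and keep ratios here are hypothetical, not PointPillars' actual configuration or the ratios the agent found.

```python
# Sketch of FLOP accounting under structured channel pruning.
# Shapes and keep_ratios are illustrative assumptions, not the paper's values.

def conv_flops(c_in, c_out, k, h, w):
    """Multiply-accumulate count for a k x k convolution producing
    an h x w output feature map (bias and stride bookkeeping ignored)."""
    return c_in * c_out * k * k * h * w

def pruned_flops(layers, keep_ratios):
    """Total FLOPs when a fraction of each layer's output channels is kept.
    Pruning layer i's outputs also shrinks layer i+1's inputs, so the
    savings compound across consecutive layers."""
    total = 0
    prev_keep = 1.0  # fraction of the previous layer's outputs that survive
    for (c_in, c_out, k, h, w), keep in zip(layers, keep_ratios):
        total += conv_flops(int(c_in * prev_keep), int(c_out * keep), k, h, w)
        prev_keep = keep
    return total

# Hypothetical 3-layer backbone on a 2D pseudo-image, as in pillar-based detectors.
layers = [(64, 64, 3, 248, 216), (64, 128, 3, 124, 108), (128, 256, 3, 62, 54)]
base = pruned_flops(layers, [1.0, 1.0, 1.0])
pruned = pruned_flops(layers, [0.5, 0.5, 0.75])
print(f"FLOP reduction: {100 * (1 - pruned / base):.1f}%")
```

An RL-based pruner, as in the paper, would search over per-layer `keep_ratios` to maximize accuracy under a FLOP budget instead of fixing them by hand.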