Keywords
Computer science
Inference
Transformer
Symbol
Graph
Trajectory
Artificial intelligence
Algorithm
Data mining
Machine learning
Theoretical computer science
Mathematics
Arithmetic
Engineering
Physics
Voltage
Astronomy
Electrical engineering
Authors
Zhibo Wang, Jiayu Guo, Zhengming Hu, Haiqiang Zhang, Junping Zhang, Jian Pu
Identifier
DOI:10.1109/ojits.2023.3233952
Abstract
Trajectory prediction is a crucial step in the autonomous driving pipeline because it not only improves the planning of future routes but also ensures vehicle safety. Building on deep neural networks, numerous trajectory prediction models have been proposed and have achieved high performance on public datasets thanks to well-designed model structures and complex optimization procedures. However, the majority of these methods overlook the fact that online real-time inference must run on a vehicle's limited computing resources. To tackle this problem, we propose a Lane Transformer that achieves both high accuracy and high efficiency in trajectory prediction. On the one hand, inspired by the well-known Transformer, we use attention blocks to replace the Graph Convolution Network (GCN) commonly used in trajectory prediction models, drastically reducing the time cost while maintaining accuracy. On the other hand, we construct our prediction model to be compatible with TensorRT, allowing it to be further optimized and easily converted into a deployment-friendly TensorRT engine. Experiments demonstrate that our model matches the baseline LaneGCN model in quantitative prediction accuracy on the Argoverse dataset while running $10\times$ to $25\times$ faster. Our 7 ms inference time is the fastest among all currently available open-source methods. Our code is publicly available at: https://github.com/mmdzb/Lane-Transformer.
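The abstract's core idea — swapping a graph-convolution layer for a self-attention block over lane-graph node features — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see their repository for that); the function names, dimensions, and the dense adjacency matrix are illustrative assumptions. The point is that attention computes data-dependent aggregation weights from the features themselves, whereas a GCN aggregates via a fixed normalized adjacency.

```python
import numpy as np

def attention_layer(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over N node features (hypothetical layer)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)               # (N, N) data-dependent weights
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=-1, keepdims=True)           # softmax rows sum to 1
    return A @ V

def gcn_layer(X, adj, W):
    """Graph convolution: aggregate neighbors via a fixed, degree-normalized adjacency."""
    deg = adj.sum(axis=1, keepdims=True)
    return np.maximum((adj / deg) @ X @ W, 0.0)  # ReLU activation

# Toy lane graph: 6 nodes with 8-dim features (sizes are illustrative).
rng = np.random.default_rng(0)
N, d = 6, 8
X = rng.standard_normal((N, d))
Wq, Wk, Wv, W = (rng.standard_normal((d, d)) for _ in range(4))
adj = np.ones((N, N))  # fully connected toy adjacency

out_attn = attention_layer(X, Wq, Wk, Wv)
out_gcn = gcn_layer(X, adj, W)
print(out_attn.shape, out_gcn.shape)  # (6, 8) (6, 8)
```

Both layers map (N, d) features to (N, d) features, so one can stand in for the other inside the model; the attention variant also avoids materializing an explicit adjacency, which is part of what makes the architecture easier to export to a TensorRT-friendly graph.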