Computer science
Multimodality
Transformer
Artificial intelligence
Machine learning
Architecture
Motion (physics)
Feature (linguistics)
Engineering
Voltage
Linguistics
Electrical engineering
World Wide Web
Philosophy
Art
Visual arts
Authors
Yicheng Liu, Jinghuai Zhang, Liangji Fang, Qinhong Jiang, Bolei Zhou
Identifier
DOI: 10.1109/cvpr46437.2021.00749
Abstract
Predicting multiple plausible future trajectories of nearby vehicles is crucial for the safety of autonomous driving. Recent motion prediction approaches attempt to achieve such multimodal motion prediction by implicitly regularizing the features or explicitly generating multiple candidate proposals. However, this remains challenging: the latent features may concentrate on the most frequent mode of the data, while proposal-based methods depend largely on prior knowledge to generate and select the proposals. In this work, we propose a novel transformer framework for multimodal motion prediction, termed mmTransformer. A novel network architecture based on stacked transformers is designed to model the multimodality at the feature level with a set of fixed independent proposals. A region-based training strategy is then developed to induce multimodality in the generated proposals. Experiments on the Argoverse dataset show that the proposed model achieves state-of-the-art performance on motion prediction, substantially improving the diversity and accuracy of the predicted trajectories. Demo video and code are available at https://decisionforce.github.io/mmTransformer.
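The abstract describes two ingredients: stacked transformers that refine a fixed set of independent proposal embeddings against encoded scene features, and a region-based training strategy that induces diversity among the resulting proposals. The PyTorch snippet below is a minimal sketch of the first ingredient only, assuming K learnable proposal queries decoded against scene tokens; the module name, dimensions, layer counts, and prediction heads are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch: fixed, independent proposal queries refined by a
# transformer decoder over encoded scene features, each decoded into one
# candidate trajectory plus a confidence score.
import torch
import torch.nn as nn

class StackedProposalTransformer(nn.Module):
    def __init__(self, d_model=128, num_proposals=6, horizon=30, num_layers=2):
        super().__init__()
        # Fixed set of independent, learnable proposal embeddings (one per mode).
        self.proposals = nn.Parameter(torch.randn(num_proposals, d_model))
        decoder_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=num_layers)
        # Each refined proposal is regressed to a future trajectory (x, y per step)
        # and scored for ranking/selection.
        self.traj_head = nn.Linear(d_model, horizon * 2)
        self.conf_head = nn.Linear(d_model, 1)
        self.horizon = horizon

    def forward(self, scene_tokens):
        # scene_tokens: (batch, num_tokens, d_model) encoded agent/map features.
        batch = scene_tokens.size(0)
        queries = self.proposals.unsqueeze(0).expand(batch, -1, -1)
        refined = self.decoder(queries, scene_tokens)         # (batch, K, d_model)
        trajs = self.traj_head(refined).view(batch, -1, self.horizon, 2)
        scores = self.conf_head(refined).squeeze(-1)          # (batch, K)
        return trajs, scores

# Usage: 6 proposals over a 30-step horizon, with 50 encoded scene tokens per sample.
model = StackedProposalTransformer()
scene = torch.randn(4, 50, 128)
trajectories, confidences = model(scene)
print(trajectories.shape, confidences.shape)  # (4, 6, 30, 2) and (4, 6)
```

Keeping the proposal embeddings fixed and independent, as the abstract suggests, lets each query specialize toward a distinct mode; the region-based training strategy (not sketched here) would additionally restrict which proposals receive gradients depending on where the ground-truth endpoint falls.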