Reinforcement learning
Trajectory
Computer science
Kinematics
Artificial intelligence
Encoding
Benchmark
Transformer
Machine learning
Engineering
Physics
Gene
Geography
Chemistry
Voltage
Electrical engineering
Astronomy
Classical mechanics
Biochemistry
Geodesy
Authors
Yujun Jiao, Mingze Miao, Zhishuai Yin, Chunyuan Lei, Xu Dong Zhu, Xiaobin Zhao, Linzhen Nie, Bo Tao
Identifier
DOI:10.1109/tits.2024.3357479
Abstract
Accurate trajectory prediction for neighboring agents is crucial for autonomous vehicles navigating complex scenes. Recent deep learning (DL) methods excel at encoding complex interactions but often generate invalid predictions due to difficulties in modeling transient and contingency interactions. This paper proposes a hierarchical hybrid framework that combines DL and reinforcement learning (RL) for multi-agent trajectory prediction, capturing the multi-scale interactions that shape future motion. In the DL stage, a Transformer-style graph neural network (GNN) is employed to encode heterogeneous interactions at intermediate and global scales, predicting multi-modal intentions as key future positions for agents. In the RL stage, we divide the scene into local scenes based on the DL predictions. A Transformer-based Proximal Policy Optimization (PPO) model, incorporating vehicle kinematics, generates future trajectories in the form of motion planning shaped by microscopic interactions and guided by a multi-objective reward that balances agent-centric accuracy and scene-wise compatibility. Experimental results on the Argoverse benchmark and driver-in-the-loop simulations demonstrate that our framework enhances the feasibility and plausibility of trajectory prediction in interactive scenes.
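The abstract's idea of coupling an RL policy with vehicle kinematics, so that predicted trajectories are physically feasible by construction, can be illustrated with a standard kinematic bicycle model. This is a minimal sketch, not the paper's implementation: the function names (`bicycle_step`, `rollout`), the wheelbase, timestep, and the placeholder policy are all illustrative assumptions; in the paper the actions would come from the Transformer-based PPO model.

```python
import math

def bicycle_step(x, y, theta, v, accel, steer, dt=0.1, wheelbase=2.8):
    """One step of a kinematic bicycle model: the policy outputs
    (acceleration, steering angle) and the model converts them into
    a physically feasible pose update."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / wheelbase) * math.tan(steer) * dt
    v += accel * dt
    return x, y, theta, v

def rollout(policy, state, horizon=30, dt=0.1):
    """Unroll a future trajectory by repeatedly feeding policy actions
    through the kinematic model ("prediction as motion planning")."""
    traj = []
    x, y, theta, v = state
    for _ in range(horizon):
        accel, steer = policy(x, y, theta, v)
        x, y, theta, v = bicycle_step(x, y, theta, v, accel, steer, dt)
        traj.append((x, y))
    return traj
```

Because every waypoint is produced by integrating bounded controls through the kinematic model, the resulting trajectory cannot contain the kinematically invalid jumps that a free-form regression head can emit; the multi-objective reward then only needs to shape *which* feasible trajectory is chosen.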