Pedestrian
Computer science
Trajectory
Context
Artificial intelligence
Generative
Machine learning
Adversarial
Physics
Astronomy
Paleontology
Transportation engineering
Engineering
Biology
Authors
Xin Yang, Shiyu Wang, Yitian Zhu, Dake Zhou, Tao Li
Identifier
DOI: 10.1016/j.ins.2024.120433
Abstract
Pedestrian behavior and trajectory prediction in highly dynamic and interactive scenes is one of the most challenging problems in autonomous driving. To model pedestrian interactions and generate multimodal trajectories for pedestrian trajectory prediction, we present a novel approach: a context-based conditional variational generative adversarial network (Context-CVGN). The network captures the physical environment, pedestrian interactions, and other scene elements by representing them as a bird's-eye-view (BEV) semantic map, and then infers multiple plausible future pedestrian trajectories. Trained and evaluated on the ETH and UCY datasets, our model outperforms several state-of-the-art methods, particularly in terms of final displacement error (FDE). These results substantiate the efficacy of our model in accurately predicting future pedestrian trajectories.