Pedestrian
Computer science
Trajectory
Encoder
Mechanism (biology)
Artificial intelligence
Convolutional code
Decoding methods
Algorithm
Engineering
Transportation engineering
Physics
Astronomy
Operating system
Quantum mechanics
Authors
Kai Chen, Xiao Song, Haitao Yuan, Xiaoxiang Ren
Source
Journal: IEEE Transactions on Intelligent Transportation Systems
[Institute of Electrical and Electronics Engineers]
Date: 2022-05-04
Volume/Issue: 23 (11): 20046-20060
Citations: 10
Identifier
DOI: 10.1109/tits.2022.3170874
Abstract
Pedestrian trajectory prediction from video is essential for many practical traffic applications. Most existing pedestrian trajectory prediction methods are based on fully connected long short-term memory (LSTM) networks and perform well on public datasets. However, these methods still have three defects: a) Most of them rely on manual annotations to obtain information about the environment surrounding the subject pedestrian, which limits practical applications; b) The interaction among pedestrians and obstacles in a scene is little studied, which leads to greater prediction error; c) Traditional LSTM methods condition only on the previous moment and ignore the correlation between the future and distant past states of the pedestrian, which generates unrealistic trajectories. To tackle these problems, first, in the data-processing stage, we use an image semantic segmentation algorithm to obtain multi-category obstacle information and design an end-to-end "Siamese Position Extraction" model to obtain more accurate pedestrian interaction data. Second, we design an end-to-end fully convolutional LSTM encoder-decoder with an attention mechanism (FLEAM) to overcome the shortcomings of LSTM. Third, we compare FLEAM with several state-of-the-art LSTM-based prediction methods on multiple video sequences in the ETH, UCY, and MOT20 datasets. The results show that our approach achieves prediction error on par with the best state-of-the-art method. However, FLEAM has more potential for practical application because it does not rely on manually annotated data. We further validate the effectiveness of FLEAM by employing manually annotated data, finding that it generates much lower prediction error.
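The abstract describes FLEAM only at a high level; the full architecture is defined in the paper itself. To illustrate the role of the attention mechanism it mentions for defect (c) — letting the decoder weigh distant past states rather than only the previous moment — here is a minimal NumPy sketch of dot-product attention over encoder hidden states. This is a generic illustration, not the authors' implementation; the function name, dimensions, and toy values are all assumptions.

```python
import numpy as np

def attention_context(encoder_states, decoder_state):
    """Dot-product attention: score every past encoder hidden state
    against the current decoder state, normalize with softmax, and
    return the weighted sum as a context vector.

    encoder_states: (T, H) hidden states over the observed trajectory
    decoder_state:  (H,)   current decoder hidden state
    """
    scores = encoder_states @ decoder_state          # (T,) relevance scores
    scores = scores - scores.max()                   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over time steps
    context = weights @ encoder_states               # (H,) weighted sum
    return context, weights

# Toy example: 4 observed time steps, hidden size 3 (hypothetical values)
enc = np.array([[1., 0., 0.],
                [0., 1., 0.],
                [0., 0., 1.],
                [1., 1., 0.]])
dec = np.array([1., 1., 0.])
ctx, w = attention_context(enc, dec)
```

Because the weights span the whole observed window, a state from many steps back can dominate the context vector whenever it matches the current decoder state, which is the correlation a plain step-by-step LSTM discards.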