Computer science
Artificial intelligence
Deep learning
Sequence (biology)
Term (time)
Short-term memory
Focus (optics)
Pattern recognition (psychology)
Machine learning
Recurrent neural network
Artificial neural network
Genetics
Quantum mechanics
Biology
Optics
Physics
Authors
Xuechang Wang, Hui Lv, Jiawei Chen
Identifier
DOI:10.1007/978-981-99-8462-6_29
Abstract
Spatiotemporal sequence prediction learning generates one or more future frames by learning from multiple frames of historical input. Most current spatiotemporal sequence prediction methods do not adequately consider the importance of long-term features for spatial reconstruction. Building on the convolutional LSTM (ConvLSTM) unit, this paper adds a memory storage unit that updates its information through the original memory cell of the ConvLSTM unit and adopts the same zigzag memory flow as the PredRNN network, allowing the model to attend to long-term and short-term spatiotemporal features simultaneously. An attention module is then proposed to extract important information from the long-term hidden states and aggregate it with the short-term hidden state, expanding the temporal receptive field of the hidden state. The resulting model, the attention gate spatiotemporal LSTM (AGST-LSTM), further strengthens the model's ability to capture spatiotemporal correlations. The model is validated on two different prediction tasks, and experiments show that AGST-LSTM achieves competitive performance compared with baseline models.
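The attention-based aggregation described in the abstract, where important information is extracted from a window of long-term hidden states and fused with the short-term hidden state, can be sketched roughly as follows. This is a minimal illustration, not the paper's exact formulation: the dot-product scoring, the window size, and the sigmoid gating form are assumptions, and real ConvLSTM hidden states would be convolutional feature maps rather than flat vectors.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_aggregate(h_t, h_past):
    """Fuse a short-term hidden state with a window of long-term hidden states.

    h_t    : (d,)     current (short-term) hidden state
    h_past : (tau, d) window of tau long-term hidden states

    Returns a state whose temporal receptive field covers the whole window.
    """
    scores = h_past @ h_t                       # (tau,) similarity of each past state to h_t
    weights = softmax(scores)                   # attention weights over the window
    h_long = weights @ h_past                   # (d,) aggregated long-term information
    g = 1.0 / (1.0 + np.exp(-(h_t + h_long)))   # hypothetical sigmoid attention gate
    return g * h_long + (1.0 - g) * h_t         # gated fusion of long- and short-term states

# Example: fuse an 8-dim hidden state with a window of 4 past states.
rng = np.random.default_rng(0)
fused = attention_aggregate(rng.standard_normal(8), rng.standard_normal((4, 8)))
```

In a full model, this fusion would be applied at each time step inside the recurrent unit, so the updated hidden state carries information from the entire window rather than only the previous step.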