Computer science
Recurrent neural network
Artificial intelligence
Sequence learning
Decoupling (probability)
Context (archaeology)
Modular design
Sequence (biology)
Artificial neural network
Deep learning
Machine learning
Short-term memory
Operating system
Engineering
Control engineering
Paleontology
Biology
Genetics
Authors
Yunbo Wang, Haixu Wu, Jianjin Zhang, Zhifeng Gao, Jianmin Wang, Philip S. Yu, Mingsheng Long
Identifier
DOI: 10.1109/TPAMI.2022.3165153
Abstract
The predictive learning of spatiotemporal sequences aims to generate future images by learning from the historical context, where the visual dynamics are believed to have modular structures that can be learned with compositional subsystems. This paper models these structures by presenting PredRNN, a new recurrent network, in which a pair of memory cells are explicitly decoupled, operate with nearly independent transition dynamics, and finally form unified representations of the complex environment. Concretely, besides the original memory cell of LSTM, the network features a zigzag memory flow that propagates in both bottom-up and top-down directions across all layers, enabling the learned visual dynamics at different levels of the RNN to communicate. It also leverages a memory decoupling loss to keep the memory cells from learning redundant features. We further propose a new curriculum learning strategy to force PredRNN to learn long-term dynamics from context frames, which can be generalized to most sequence-to-sequence models. We provide detailed ablation studies to verify the effectiveness of each component. Our approach obtains highly competitive results on five datasets for both action-free and action-conditioned predictive learning scenarios.
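The zigzag memory flow described in the abstract can be made concrete with a short wiring sketch: within each timestep a shared spatiotemporal memory rises bottom-up through the layer stack, and the topmost memory state is handed back to the bottom layer at the next timestep. The sketch below is a minimal illustration under stated assumptions — `cells` is a list of hypothetical recurrent cells with the signature shown, not the paper's actual implementation.

```python
# A minimal sketch of the zigzag memory flow. Assumption: each element of
# `cells` is a hypothetical ST-LSTM-like cell that consumes and returns a
# shared spatiotemporal memory `m` alongside the usual per-layer states.
def zigzag_rollout(cells, frames, h, c, m):
    """cells: list of L recurrent cells; frames: input frame sequence;
    h, c: per-layer hidden and temporal-memory states; m: shared
    spatiotemporal memory."""
    outputs = []
    for x in frames:
        inp = x
        for l, cell in enumerate(cells):
            # Bottom-up: m is updated layer by layer within this timestep.
            h[l], c[l], m = cell(inp, h[l], c[l], m)
            inp = h[l]
        # Top-down: the top layer's m carries over to layer 0 of the
        # next timestep, completing the zigzag.
        outputs.append(h[-1])
    return outputs, h, c, m
```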
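The memory decoupling loss can likewise be sketched as a regularizer on the increments of the two memory cells. A minimal PyTorch sketch, assuming the per-step increments of the temporal memory C_t and the spatiotemporal memory M_t are available as tensors; the names `delta_c`/`delta_m` and the absolute-cosine form are assumptions for illustration:

```python
# A hedged sketch of a memory decoupling regularizer: penalize the
# absolute cosine similarity between the increments of the two memory
# cells so they learn non-redundant features. Tensor names and shapes
# are illustrative assumptions, not the paper's exact implementation.
import torch
import torch.nn.functional as F

def memory_decoupling_loss(delta_c: torch.Tensor,
                           delta_m: torch.Tensor) -> torch.Tensor:
    """delta_c, delta_m: increments of the temporal memory C_t and the
    spatiotemporal memory M_t, shape (batch, channels, H, W)."""
    b, ch = delta_c.shape[:2]
    # Flatten spatial dimensions so each channel is one vector.
    dc = delta_c.reshape(b, ch, -1)
    dm = delta_m.reshape(b, ch, -1)
    # Absolute cosine similarity per channel; zero means fully decoupled.
    cos = F.cosine_similarity(dc, dm, dim=-1).abs()
    return cos.mean()
```

Driving this term toward zero encourages the two memory cells to capture complementary rather than redundant dynamics, which is the decoupling behavior the abstract describes.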
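Finally, the curriculum learning strategy is, at a high level, a schedule over whether the model sees ground-truth frames or its own predictions during training, which is what lets it transfer to most sequence-to-sequence models. A minimal scheduled-sampling-style sketch, assuming an exponential annealing schedule; the decay constant and function names are illustrative, not the paper's exact recipe:

```python
# A hedged sketch of a scheduled-sampling-style training curriculum:
# at each step, the next input frame is drawn from either the ground
# truth or the model's own previous prediction, with a probability that
# anneals over training. Schedule and names are assumptions.
import random

def sampling_prob(step: int, decay: float = 0.9999) -> float:
    """Probability of feeding the ground-truth frame at this step."""
    return decay ** step  # anneals from 1.0 toward 0.0

def choose_input(gt_frame, pred_frame, step: int):
    """Pick the next input frame according to the curriculum."""
    if random.random() < sampling_prob(step):
        return gt_frame      # early training: mostly real context frames
    return pred_frame        # late training: mostly the model's own output
```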