Keywords
Computer science, encoder, artificial intelligence, representation, sequence, pattern recognition, feature learning, convolutional neural network, deep learning, sequence labeling, machine learning, natural language processing
Authors
Nitish Srivastava, Elman Mansimov, Ruslan Salakhutdinov
Source
Venue: arXiv (Cornell University)
Date: 2015-02-16
Cited by: 4
Identifier
DOI: 10.48550/arxiv.1502.04681
Abstract
We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations ("percepts") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.
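To make the encoder-decoder structure described in the abstract concrete, below is a minimal sketch in PyTorch of the composite model: an encoder LSTM compresses a frame sequence into a fixed-length state, and two decoder LSTMs read that state to reconstruct the input (in reverse order, as the paper does) and to predict future frames. This is not the authors' original implementation; all dimensions, names, and the choice of unconditioned decoders (zero inputs at every step, one of the design options the abstract mentions) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CompositeLSTMAutoencoder(nn.Module):
    """Sketch of a composite encoder-decoder LSTM for video sequences.

    Sizes are illustrative; inputs can be flattened pixel patches or
    "percepts" from a pretrained convolutional net, as in the abstract.
    """
    def __init__(self, frame_dim: int = 1024, hidden_dim: int = 2048):
        super().__init__()
        self.encoder = nn.LSTM(frame_dim, hidden_dim, batch_first=True)
        # Unconditioned decoders: they receive zeros at every step and rely
        # only on the encoder's final (h, c) state. Conditioning the decoder
        # on its own generated output is the alternative the paper explores.
        self.recon_decoder = nn.LSTM(frame_dim, hidden_dim, batch_first=True)
        self.future_decoder = nn.LSTM(frame_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, frame_dim)

    def forward(self, frames: torch.Tensor, n_future: int):
        # frames: (batch, time, frame_dim)
        batch, t, d = frames.shape
        _, state = self.encoder(frames)  # fixed-length representation (h, c)
        recon_h, _ = self.recon_decoder(frames.new_zeros(batch, t, d), state)
        future_h, _ = self.future_decoder(
            frames.new_zeros(batch, n_future, d), state)
        return self.readout(recon_h), self.readout(future_h)

# Illustrative training step: L2 loss on the reversed reconstruction target
# plus the future-prediction target (both objectives of the composite model).
model = CompositeLSTMAutoencoder()
frames = torch.randn(4, 16, 1024)        # 4 clips, 16 frames each
target_future = torch.randn(4, 8, 1024)  # next 8 frames (dummy data here)
recon, future = model(frames, n_future=8)
loss = ((recon - frames.flip(1)) ** 2).mean() \
     + ((future - target_future) ** 2).mean()
loss.backward()
```

For the supervised evaluation in the abstract, the same encoder state would then be finetuned with a classification head on UCF-101 or HMDB-51.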