Computer science
Probabilistic logic
Artificial intelligence
Reinforcement learning
Unsupervised learning
Predictive coding
Natural language processing
Feature learning
Machine learning
Mathematics
Statistics
Authors
Aäron van den Oord, Yazhe Li, Oriol Vinyals
Source
Journal: Cornell University - arXiv
Date: 2018-07-10
Citations: 1569
Abstract
While supervised learning has enabled great progress in many applications, unsupervised learning has not seen such widespread adoption, and remains an important and challenging endeavor for artificial intelligence. In this work, we propose a universal unsupervised learning approach to extract useful representations from high-dimensional data, which we call Contrastive Predictive Coding. The key insight of our model is to learn such representations by predicting the future in latent space by using powerful autoregressive models. We use a probabilistic contrastive loss which induces the latent space to capture information that is maximally useful to predict future samples. It also makes the model tractable by using negative sampling. While most prior work has focused on evaluating representations for a particular modality, we demonstrate that our approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.
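The probabilistic contrastive loss described above (the paper's InfoNCE objective) can be sketched in a few lines: the model must pick out the true future sample from a set containing one positive and several negatives, by softmax over similarity scores. The sketch below is a simplified illustration, not the paper's implementation — it uses a plain dot product where the paper uses a learned bilinear score, and represents vectors as Python lists.

```python
import math

def dot(a, b):
    """Dot product of two equal-length vectors (lists of floats)."""
    return sum(x * y for x, y in zip(a, b))

def info_nce_loss(context, positive, negatives):
    """InfoNCE-style contrastive loss for a single context vector.

    The score of the positive sample is contrasted against the scores
    of the negative samples via a softmax; the loss is the negative
    log-probability assigned to the positive. Dot-product scoring is
    a hypothetical simplification of the paper's bilinear score.
    """
    scores = [dot(context, positive)] + [dot(context, n) for n in negatives]
    m = max(scores)  # subtract the max for numerical stability
    log_denom = m + math.log(sum(math.exp(s - m) for s in scores))
    return -(scores[0] - log_denom)  # -log softmax(positive score)
```

When the context aligns with the positive, the loss falls below the chance level of log(1 + number of negatives); when a negative aligns better, the loss rises above it — which is exactly the signal that shapes the latent space to be predictive of future samples.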