Computer science
Benchmark (surveying)
Sequence labeling
Recurrent neural network
Task (project management)
Encoding (set theory)
Artificial intelligence
Machine translation
Convolutional neural network
Sequence (biology)
Machine learning
Natural language processing
Deep learning
Programming language
Artificial neural network
Set (abstract data type)
Geodesy
Geography
Management
Economics
Genetics
Biology
Authors
Shaojie Bai, J. Zico Kolter, Vladlen Koltun
Source
Journal: Cornell University - arXiv
Date: 2018-01-01
Citations: 3857
Identifier
DOI: 10.48550/arXiv.1803.01271
Abstract
For most deep learning practitioners, sequence modeling is synonymous with recurrent networks. Yet recent results indicate that convolutional architectures can outperform recurrent networks on tasks such as audio synthesis and machine translation. Given a new sequence modeling task or dataset, which architecture should one use? We conduct a systematic evaluation of generic convolutional and recurrent architectures for sequence modeling. The models are evaluated across a broad range of standard tasks that are commonly used to benchmark recurrent networks. Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory. We conclude that the common association between sequence modeling and recurrent networks should be reconsidered, and convolutional networks should be regarded as a natural starting point for sequence modeling tasks. To assist related work, we have made code available at http://github.com/locuslab/TCN.
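The "simple convolutional architecture" named in the abstract is the temporal convolutional network (TCN), built from causal, dilated 1-D convolutions with residual connections. Below is a minimal PyTorch sketch of one such residual block under those assumptions; the class names, layer sizes, and the stacking at the end are illustrative, not the authors' exact implementation (see the linked repository for that).

```python
# Minimal sketch of a TCN residual block, assuming the causal dilated-convolution
# design described in the paper; names and sizes here are illustrative, not the
# authors' exact implementation (see http://github.com/locuslab/TCN for that).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Conv1d):
    """1-D convolution that sees only past time steps (left-only padding)."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__(in_ch, out_ch, kernel_size, dilation=dilation)
        self.left_pad = (kernel_size - 1) * dilation

    def forward(self, x):                # x: (batch, channels, time)
        return super().forward(F.pad(x, (self.left_pad, 0)))

class TCNBlock(nn.Module):
    """Residual block: two causal convs with ReLUs plus a 1x1 skip projection."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.conv1 = CausalConv1d(in_ch, out_ch, kernel_size, dilation)
        self.conv2 = CausalConv1d(out_ch, out_ch, kernel_size, dilation)
        self.skip = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        y = F.relu(self.conv1(x))
        y = F.relu(self.conv2(y))
        return F.relu(y + self.skip(x))  # residual connection

# Doubling the dilation at every level grows the receptive field exponentially,
# which is where the "longer effective memory" claimed in the abstract comes from.
tcn = nn.Sequential(*[TCNBlock(32 if i else 8, 32, dilation=2 ** i) for i in range(4)])
out = tcn(torch.randn(1, 8, 100))        # -> torch.Size([1, 32, 100])
```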