Transformer
Computer science
Series (stratigraphy)
Reliability engineering
Engineering
Electrical engineering
Voltage
Paleontology
Biology
Authors
Sabeen Ahmed, Ian E. Nielsen, Aakash Tripathi, Shamoon Ahmad Siddiqui, Ravi P. Ramachandran, Ghulam Rasool
Identifier
DOI: 10.1007/s00034-023-02454-8
Abstract
The Transformer architecture has widespread applications, particularly in Natural Language Processing and computer vision. Recently, Transformers have also been employed in various aspects of time-series analysis. This tutorial provides an overview of the Transformer architecture, its applications, and a collection of examples from recent research papers in time-series analysis. We delve into an explanation of the core components of the Transformer, including the self-attention mechanism, positional encoding, multi-head attention, and the encoder/decoder structure. Several enhancements to the original Transformer architecture are highlighted to tackle time-series tasks. The tutorial also provides best practices and techniques for overcoming the challenge of effectively training Transformers for time-series analysis.
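Since the abstract names the core components covered by the tutorial (self-attention, positional encoding, multi-head attention), the following is a minimal NumPy sketch of single-head scaled dot-product self-attention applied to a toy time-series embedding with sinusoidal positional encoding. The function names, dimensions, and random data are illustrative assumptions for this sketch, not code from the paper.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding in the style of Vaswani et al. (2017)."""
    positions = np.arange(seq_len)[:, None]            # (seq_len, 1)
    dims = np.arange(d_model)[None, :]                 # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates                   # (seq_len, d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])              # even dimensions: sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])              # odd dimensions: cosine
    return pe

def scaled_dot_product_self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention: softmax(Q K^T / sqrt(d_k)) V."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                    # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ v                                 # attended representations

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq_len, d_model = 8, 16                           # e.g. 8 time steps, 16-dim embeddings
    series = rng.normal(size=(seq_len, d_model))       # toy time-series embedding (assumed)
    x = series + sinusoidal_positional_encoding(seq_len, d_model)  # inject temporal order
    w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    out = scaled_dot_product_self_attention(x, w_q, w_k, w_v)
    print(out.shape)                                   # (8, 16): one attended vector per time step
```

Multi-head attention, as discussed in the tutorial, would repeat this computation with several independent projection matrices and concatenate the results; this sketch keeps a single head for brevity.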