Computer science
Transformer
Embedding
Univariate
Segmentation
Computation
Artificial intelligence
Machine learning
Quadratic growth
Multivariate statistics
Pattern recognition (psychology)
Algorithm
Engineering
Voltage
Electrical engineering
Authors
Yuqi Nie, Nam Hoai Nguyen, Phanwadee Sinthong, Jayant Kalagnanam
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Citations: 86
Identifiers
DOI: 10.48550/arxiv.2211.14730
Abstract
We propose an efficient design of Transformer-based models for multivariate time series forecasting and self-supervised representation learning. It is based on two key components: (i) segmentation of time series into subseries-level patches, which serve as input tokens to the Transformer; (ii) channel-independence, where each channel contains a single univariate time series and all series share the same embedding and Transformer weights. The patching design has a three-fold benefit: local semantic information is retained in the embedding; computation and memory usage of the attention maps are quadratically reduced for the same look-back window; and the model can attend to a longer history. Our channel-independent patch time series Transformer (PatchTST) significantly improves long-term forecasting accuracy compared with SOTA Transformer-based models. We also apply our model to self-supervised pre-training tasks and attain excellent fine-tuning performance, which outperforms supervised training on large datasets. Transferring masked pre-trained representations from one dataset to others also produces SOTA forecasting accuracy. Code is available at: https://github.com/yuqinie98/PatchTST.
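The two components described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see the linked repository for that); the patch length 16, stride 8, and the 7-channel, 336-step input below are illustrative values chosen here. It shows how patching shrinks the token count from L raw time steps to roughly L/stride tokens, which is what reduces the attention maps quadratically, and how channel-independence patches each univariate channel separately while the model would share one embedding and one set of Transformer weights across channels.

```python
import numpy as np

def make_patches(series, patch_len, stride):
    """Split one univariate series into subseries-level patches.

    Each patch becomes one input token, so a look-back window of length L
    yields (L - patch_len) // stride + 1 tokens instead of L, shrinking
    the attention map from O(L^2) toward O((L/stride)^2).
    """
    L = len(series)
    n_patches = (L - patch_len) // stride + 1
    return np.stack([series[i * stride : i * stride + patch_len]
                     for i in range(n_patches)])

def patch_multivariate(x, patch_len, stride):
    """Channel-independence: patch each channel on its own.

    x has shape (n_channels, L); the result has shape
    (n_channels, n_patches, patch_len). Every channel's patches would then
    pass through the same shared embedding and Transformer weights.
    """
    return np.stack([make_patches(ch, patch_len, stride) for ch in x])

# Illustrative sizes: 7 channels, look-back window of 336 steps.
x = np.random.randn(7, 336)
tokens = patch_multivariate(x, patch_len=16, stride=8)
print(tokens.shape)  # (7, 41, 16): 41 tokens per channel instead of 336 raw steps
```

With 41 tokens instead of 336, the attention map has 41^2 ≈ 1.7k entries rather than 336^2 ≈ 113k, so for a fixed compute budget the model can afford a much longer look-back window.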