Time series
Machine learning
Computer science
Artificial intelligence
Econometrics
Mathematics
Authors
Yuxuan Bian, Xuan Ju, Jiangtong Li, Zhijian Xu, Dawei Cheng, Qiang Xu
Source
Journal: Cornell University - arXiv
Date: 2024-02-07
Identifier
DOI: 10.48550/arXiv.2402.04852
Abstract
In this study, we present aLLM4TS, an innovative framework that adapts Large Language Models (LLMs) for time-series representation learning. Central to our approach is that we reconceive time-series forecasting as a self-supervised, multi-patch prediction task, which, compared to traditional mask-and-reconstruction methods, captures temporal dynamics in patch representations more effectively. Our strategy encompasses two-stage training: (i) a causal continual pre-training phase on various time-series datasets, anchored on next-patch prediction, effectively syncing LLM capabilities with the intricacies of time-series data; (ii) fine-tuning for multi-patch prediction in the targeted time-series context. A distinctive element of our framework is the patch-wise decoding layer, which departs from previous methods reliant on sequence-level decoding. Such a design directly transposes individual patches into temporal sequences, thereby significantly bolstering the model's proficiency in mastering temporal patch-based representations. aLLM4TS demonstrates superior performance in several downstream tasks, proving its effectiveness in deriving temporal representations with enhanced transferability and marking a pivotal advancement in the adaptation of LLMs for time-series analysis.
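The abstract's two central ideas, causal next-patch pre-training and a patch-wise decoding head, can be illustrated with a short PyTorch sketch. This is a minimal toy under assumed shapes, not the authors' implementation: a small Transformer stands in for the LLM backbone, and every name here (TinyPatchTS, patch_len, next_patch_loss) is hypothetical rather than taken from the aLLM4TS code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyPatchTS(nn.Module):
    """Toy stand-in for the framework's backbone: patch embedding, a causal
    Transformer, and a patch-wise decoding head (all names hypothetical)."""
    def __init__(self, patch_len=16, d_model=64, n_head=4, n_layer=2):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_head, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layer)
        # Patch-wise decoding: each patch state is mapped back to its own
        # patch_len-step segment, instead of flattening every patch state
        # and decoding the full horizon with one sequence-level linear layer.
        self.patch_head = nn.Linear(d_model, patch_len)

    def forward(self, x):
        # x: (batch, seq_len), seq_len must be divisible by patch_len
        b, t = x.shape
        patches = x.view(b, t // self.patch_len, self.patch_len)
        h = self.embed(patches)
        n = h.size(1)
        # Causal mask so patch i attends only to patches <= i
        mask = torch.triu(torch.full((n, n), float("-inf")), diagonal=1)
        h = self.backbone(h, mask=mask)
        return self.patch_head(h)  # (batch, num_patches, patch_len)

def next_patch_loss(model, x):
    """Stage (i): self-supervised next-patch prediction, the time-series
    analogue of next-token prediction in LLM pre-training."""
    pred = model(x)[:, :-1]                                  # predicts patches 2..N
    target = x.view(x.size(0), -1, model.patch_len)[:, 1:]  # ground truth, shifted one patch
    return F.mse_loss(pred, target)

x = torch.randn(8, 256)          # 8 synthetic series of length 256 (16 patches)
model = TinyPatchTS()
loss = next_patch_loss(model, x)
loss.backward()
```

Note how the decoding head maps each patch state to its own patch_len-step segment independently; a sequence-level decoder would instead flatten all patch states and predict the entire horizon with one large linear layer, the design the abstract identifies as the weaker alternative.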