Authors
Seunghan Lee, Taeyoung Park, Kibok Lee
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Citations: 3
Identifier
DOI: 10.48550/arxiv.2312.16427
Abstract
Masked time series modeling has recently gained much attention as a self-supervised representation learning strategy for time series. Inspired by masked image modeling in computer vision, recent works first patchify and partially mask out time series, and then train Transformers to capture the dependencies between patches by predicting masked patches from unmasked patches. However, we argue that capturing such patch dependencies might not be an optimal strategy for time series representation learning; rather, learning to embed patches independently results in better time series representations. Specifically, we propose to use 1) the simple patch reconstruction task, which autoencodes each patch without looking at other patches, and 2) the simple patch-wise MLP that embeds each patch independently. In addition, we introduce complementary contrastive learning to hierarchically capture adjacent time series information efficiently. Our proposed method improves time series forecasting and classification performance compared to state-of-the-art Transformer-based models, while being more efficient in terms of the number of parameters and training/inference time. Code is available at this repository: https://github.com/seunghan96/pits.
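To make the two ingredients in the abstract concrete, below is a minimal PyTorch sketch of a patch-wise MLP autoencoder trained with a per-patch reconstruction loss. The class name, layer sizes, patch length, and training setup are illustrative assumptions, not the authors' implementation; see the linked repository for the official code.

```python
import torch
import torch.nn as nn

class PatchwiseMLPAutoencoder(nn.Module):
    """Embeds each time series patch independently and reconstructs it
    without attending to other patches (hypothetical layer sizes)."""
    def __init__(self, patch_len: int, d_model: int = 128):
        super().__init__()
        # A shared MLP is applied to every patch separately: unlike a
        # Transformer, there is no cross-patch attention or masking.
        self.encoder = nn.Sequential(
            nn.Linear(patch_len, d_model),
            nn.ReLU(),
            nn.Linear(d_model, d_model),
        )
        self.decoder = nn.Linear(d_model, patch_len)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_patches, patch_len). The MLP acts on the last
        # dimension, so each patch is encoded with no access to its neighbors.
        z = self.encoder(x)
        return self.decoder(z)

# Usage: patchify a univariate series and minimize the simple patch
# reconstruction loss (autoencoding each patch on its own).
series = torch.randn(32, 512)                  # (batch, time)
patches = series.unfold(1, 16, 16)             # (batch, 32 patches, patch_len=16)
model = PatchwiseMLPAutoencoder(patch_len=16)
recon = model(patches)
loss = nn.functional.mse_loss(recon, patches)  # per-patch reconstruction objective
```

Because the encoder never mixes information across patches, the representation of each patch depends only on its own values, which is the independence property the abstract argues for; the complementary contrastive learning component described in the paper would be added on top of these patch embeddings.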