Imputation (statistics)
Missing data
Computer science
Reinforcement learning
Artificial intelligence
Time series
Regression
Machine learning
Data mining
Series (stratigraphy)
Statistics
Mathematics
Biology
Paleontology
Authors
Philip B. Weerakody,Kok Wai Wong,Guanjin Wang
Identifiers
DOI:10.1016/j.asoc.2023.110314
Abstract
Time series modelling has been successfully handled by Long Short-Term Memory (LSTM) models. Yet their performance can be severely inhibited by the missing values prevalent in many real-life datasets. Many previous studies have been dedicated to imputation methods for generating a complete time series sequence, but these methods are limited by imputation bias and inaccuracy. In this paper, we propose a new LSTM model incorporating policy gradient (PG) based reinforcement learning, called PG-LSTM, which can mitigate the effect of missing data and capture time-based input feature patterns more effectively to improve prediction performance. Inspired by the success of numerous sequence models in processing language data more efficiently by skipping irrelevant tokens, the PG-LSTM introduces, for the first time, dynamic skip connections between LSTM cell states for time series classification and regression tasks. Specifically, the proposed model comprises a modified LSTM cell architecture that internally calls a policy-based reinforcement learning agent to generate a skipping action, allowing the model to dynamically select the optimal subset of hidden and cell states from past states and thereby capture periodic and non-periodic patterns within a time series sequence. Moreover, the PG-LSTM includes a lightweight imputation layer that applies a simple missing value imputation strategy while incorporating missing indicators and skipping segments of unimportant data, reducing the limitations associated with imputed data when handling missing values. Our experimental results on regression and classification tasks on time series data with high rates of missing values demonstrate that the PG-LSTM outperforms current gated recurrent neural networks (RNN) and conventional non-neural network algorithms.
The PG-LSTM improves AUC by up to 18.5% on the classification task and RMSE by up to 19.3% on the regression task over gated RNN models. Our findings are further validated using statistical significance testing with post hoc analysis.
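To make the mechanism described above concrete, here is a minimal NumPy sketch of the two ideas in the abstract: an LSTM cell whose policy head samples which past (hidden, cell) state to connect to, and an input layer that fills missing values with the last observation and appends a binary missing indicator. All names (`SkipLSTMCell`, `Wp`, `max_skip`) are hypothetical; the paper's actual model trains the policy with policy gradients (e.g. REINFORCE), which is omitted here, so this is an illustrative forward pass, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SkipLSTMCell:
    """Toy LSTM cell with a policy head that samples a dynamic skip
    connection: which of the last `max_skip` states to use as (h, c)
    input, instead of always using the state at t-1."""

    def __init__(self, n_in, n_hid, max_skip):
        self.n_hid, self.max_skip = n_hid, max_skip
        s = 0.1
        self.W = rng.normal(0, s, (4 * n_hid, n_in + n_hid))  # gate weights
        self.b = np.zeros(4 * n_hid)
        # policy head: logits over skip distances 0..max_skip-1
        self.Wp = rng.normal(0, s, (max_skip, n_in + n_hid))

    def step(self, x, history):
        # history: list of past (h, c) pairs, newest last
        h_last, _ = history[-1]
        z = np.concatenate([x, h_last])
        logits = self.Wp @ z
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        k = int(rng.choice(self.max_skip, p=probs))  # sampled skip action
        # connect to the state k steps back (clamped at sequence start)
        h_prev, c_prev = history[-(k + 1)] if k < len(history) else history[0]
        g = self.W @ np.concatenate([x, h_prev]) + self.b
        i, f, o = (sigmoid(g[j * self.n_hid:(j + 1) * self.n_hid])
                   for j in range(3))
        c_tilde = np.tanh(g[3 * self.n_hid:])
        c = f * c_prev + i * c_tilde
        h = o * np.tanh(c)
        return h, c, k

# Usage with the simple imputation idea: fill missing inputs with the
# last observed value and append a missing-indicator mask.
d = 3
cell = SkipLSTMCell(n_in=2 * d, n_hid=5, max_skip=4)
history = [(np.zeros(5), np.zeros(5))]
last_obs = np.zeros(d)
for t in range(8):
    x_raw = rng.normal(size=d)
    x_raw[rng.random(d) < 0.3] = np.nan        # simulate missingness
    mask = np.isnan(x_raw).astype(float)
    x = np.where(mask > 0, last_obs, x_raw)    # last-value imputation
    last_obs = x
    h, c, k = cell.step(np.concatenate([x, mask]), history)
    history.append((h, c))
```

The key design point the abstract describes is that the skip distance `k` is an action sampled per time step, so the network can jump over imputed or unimportant segments rather than propagating their bias through every state transition.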