Autoencoder
Recurrent neural network
Computer science
Deep learning
Convolutional neural network
Artificial intelligence
Encoder
Time series
Series (stratigraphy)
Data compression
Pattern recognition (psychology)
Artificial neural network
Algorithm
Machine learning
Paleontology
Biology
Operating system
Authors
Zhong Zheng,Zijun Zhang
Identifier
DOI:10.1016/j.asoc.2023.110797
Abstract
The sharply growing volume of time series data driven by recent advances in sensing technology poses emerging challenges to data transfer speed, storage, and the corresponding energy consumption. To tackle the overwhelming volume of time series data in transmission and storage, time series compression, which encodes time series into smaller representations while enabling faithful restoration with minimal reconstruction error, has attracted significant attention. Numerous methods have been developed, and recent deep learning approaches with minimal assumptions on data characteristics, such as recurrent autoencoders, have proven competitive. Yet, capturing long-term dependencies in time series compression remains a significant challenge calling for further development. In response, this paper proposes a temporal convolutional recurrent autoencoder framework for more effective time series compression. First, two autoencoder modules are developed: the temporal convolutional network encoder with a recurrent neural network decoder (TCN-RNN), and the temporal convolutional network encoder with an attention-assisted recurrent neural network decoder (TCN-ARNN). The TCN-RNN employs a single recurrent neural network decoder to reconstruct the time series in reverse order. In contrast, the TCN-ARNN uses two recurrent neural networks to reconstruct the time series in forward and reverse order in parallel. In addition, a timestep-wise attention network is developed to combine the forward and reverse reconstructions into the final reconstruction with adaptive weights. Finally, a model selection procedure is developed to adaptively choose between the TCN-RNN and TCN-ARNN based on their reconstruction performance on the validation dataset.
Computational experiments on five datasets show that the proposed temporal convolutional recurrent autoencoder outperforms state-of-the-art benchmark models, achieving lower reconstruction errors at the same compression ratio, with an improvement of up to 45.14% in average mean squared error. The results indicate the promising potential of the proposed temporal convolutional recurrent autoencoder for time series compression in various applications involving long time series data.
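The timestep-wise attention described in the abstract fuses the forward and reverse reconstructions with adaptive per-timestep weights. The following is a minimal sketch of that fusion step only, under the assumption that the weights come from a two-way softmax over per-timestep scores; the function name, inputs, and scoring scheme are illustrative, not the paper's implementation.

```python
import math

def timestep_attention_fuse(x_fwd, x_rev, s_fwd, s_rev):
    """Fuse forward and reverse reconstructions of a time series.

    x_fwd, x_rev : per-timestep reconstructed values (equal length)
    s_fwd, s_rev : per-timestep attention scores (assumed to come from
                   a small attention network; hypothetical here)

    At each timestep the two scores are softmax-normalized into
    adaptive weights, and the fused value is the weighted sum.
    """
    fused = []
    for xf, xr, sf, sr in zip(x_fwd, x_rev, s_fwd, s_rev):
        ef, er = math.exp(sf), math.exp(sr)
        wf = ef / (ef + er)              # softmax weight for forward branch
        fused.append(wf * xf + (1.0 - wf) * xr)
    return fused

# Equal scores give equal weights, so fusion reduces to the mean:
# timestep_attention_fuse([1.0], [3.0], [0.0], [0.0]) -> [2.0]
```

Because the weights are computed independently at every timestep, the fusion can lean on the forward reconstruction early in the sequence and the reverse reconstruction late in it, which is the intuition behind reconstructing in both directions.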