Computer Science
Reinforcement Learning
Artificial Intelligence
Deep Learning
Machine Learning
Authors
Guang Zheng,Yasong Li,Zheng Zhou,Ruqiang Yan
Source
Journal: IEEE Internet of Things Journal
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Issue: 1-1
Identifier
DOI: 10.1109/jiot.2024.3363610
Abstract
Remaining useful life (RUL) prediction is a crucial task in prognostics and health management (PHM) systems, as it contributes to more reliable equipment operation. With the development of Industrial Internet of Things (IIoT) technologies, it becomes possible to efficiently coordinate data collection for mechanical equipment, enabling real-time monitoring of device status and performance and thereby supporting more accurate RUL estimation. While current RUL prediction techniques rely predominantly on deep learning (DL), these approaches often neglect the temporal correlation within training samples, resulting in unstable prediction outcomes. To address this issue, a novel RUL prediction method is introduced, leveraging deep reinforcement learning (DRL). This method combines the effective feature extraction ability of DL with the preservation of temporal correlation between samples through reinforcement learning. First, an autoencoder (AE) is employed to extract the key features most relevant to the degradation process from the original signals collected from mechanical equipment. Second, the state variables in reinforcement learning are constructed from the extracted features and the predicted RUL value of the sample at the previous time step. Finally, a deep reinforcement learning model based on the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm is trained after setting an appropriate action space and reward function. Validation on the XJTU-SY bearing dataset demonstrates that the DRL method yields a lower root mean square error (RMSE) and more stable prediction results than alternative methods.
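The state and reward construction the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature values, the concatenation-based state layout, and the negative-absolute-error reward are all assumptions for demonstration; the paper's actual AE features, action space, and reward function may differ.

```python
import numpy as np

def build_state(ae_features, prev_rul_pred):
    """RL state = AE-extracted degradation features + previous-step RUL
    prediction (this coupling is what carries temporal correlation across
    samples, per the abstract)."""
    return np.concatenate([ae_features, [prev_rul_pred]])

def reward(predicted_rul, true_rul):
    """A simple illustrative reward: negative absolute prediction error,
    so actions (RUL predictions) closer to the true RUL score higher.
    The paper's actual reward function is not specified here."""
    return -abs(predicted_rul - true_rul)

# Toy example: 4 hypothetical AE features at time t, plus the
# normalized RUL predicted at time t-1.
features_t = np.array([0.12, -0.45, 0.88, 0.03])
prev_pred = 0.75

state_t = build_state(features_t, prev_pred)
print(state_t.shape)          # (5,)
print(reward(0.70, 0.72))     # small negative value near -0.02
```

A TD3 agent would then map `state_t` to a continuous action interpreted as the RUL estimate at time t, which in turn feeds into the next state, preserving the sample-to-sample temporal link that purely supervised DL pipelines discard.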