Computer science
Artificial intelligence
Residual
Discriminative model
Classifier (UML)
Domain (mathematical analysis)
Pattern recognition (psychology)
Machine learning
Domain adaptation
Deep learning
Metric (unit)
Margin (machine learning)
Data mining
Algorithm
Mathematics
Engineering
Mathematical analysis
Operations management
Authors
Fu Song, Yongjian Zhang, Lin Lin, Minghang Zhao, Shisheng Zhong
Identifier
DOI: 10.1016/j.ress.2021.108012
Abstract
Currently developed unsupervised domain adaptation (UDA) methods have somewhat improved the prognostic performance of cross-domain remaining useful life (RUL) prediction, but optimizing only a single metric (MMD or an adversarial mechanism) to reduce the domain discrepancy limits further improvement. Moreover, learning a set of good features has been a long-standing issue in RUL prediction. To address these issues, an effective UDA method, named deep residual LSTM with domain invariance (DIDRLSTM), is investigated to improve prognostic performance. First, the deep residual LSTM (DRLSTM) is designed as the feature extractor to learn high-level features from both the source and target domains. The introduction of residual connections allows the DRLSTM to add more nonlinear layers and thereby learn more representative degradation features. Second, two modules are integrated to further reduce the domain discrepancy. One is domain adaptation, which reduces the domain discrepancy by adding MK-MMD constraints that map the features to an RKHS. The other is domain confusion, which reduces the domain discrepancy by minimizing the discriminative ability of a domain classifier trained under an adversarial optimization strategy. Finally, the outstanding performance of DIDRLSTM is validated on the C-MAPSS and FEMTO-ST datasets. The experimental results show that DIDRLSTM outperforms five state-of-the-art UDA methods.
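As a rough illustration of the three ingredients the abstract names, the following PyTorch sketch combines a residual LSTM feature extractor, a multi-kernel MMD penalty between source and target features, and a domain classifier trained through a gradient reversal layer for domain confusion. It is a minimal sketch under assumed layer sizes, kernel bandwidths, and loss weights, not the authors' DIDRLSTM implementation.

```python
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negated, scaled gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class ResidualLSTMBlock(nn.Module):
    """LSTM layer whose output is added to its input (residual/skip connection)."""
    def __init__(self, dim):
        super().__init__()
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, x):
        out, _ = self.lstm(x)
        return out + x  # residual connection


class ResidualLSTMExtractor(nn.Module):
    """Stack of residual LSTM blocks; the last time step serves as the feature vector."""
    def __init__(self, in_dim, hidden_dim=64, depth=3):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden_dim)
        self.blocks = nn.ModuleList(ResidualLSTMBlock(hidden_dim) for _ in range(depth))

    def forward(self, x):                      # x: (batch, time, in_dim)
        h = self.proj(x)
        for block in self.blocks:
            h = block(h)
        return h[:, -1, :]                     # (batch, hidden_dim)


def mk_mmd(source, target, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Multi-kernel MMD^2 with a small bank of Gaussian kernels (illustrative bandwidths)."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return sum(torch.exp(-d2 / (2 * s ** 2)) for s in sigmas) / len(sigmas)
    return (kernel(source, source).mean()
            + kernel(target, target).mean()
            - 2 * kernel(source, target).mean())


# Toy training step on random data, showing how the three losses combine.
extractor = ResidualLSTMExtractor(in_dim=14)
rul_head = nn.Linear(64, 1)                    # RUL regressor (source labels only)
domain_head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(
    [*extractor.parameters(), *rul_head.parameters(), *domain_head.parameters()], lr=1e-3
)

xs, ys = torch.randn(16, 30, 14), torch.rand(16, 1)   # labelled source windows
xt = torch.randn(16, 30, 14)                          # unlabelled target windows

fs, ft = extractor(xs), extractor(xt)
rul_loss = nn.functional.mse_loss(rul_head(fs), ys)   # supervised loss on source domain
mmd_loss = mk_mmd(fs, ft)                             # domain adaptation term

feats = torch.cat([fs, ft], dim=0)
dom_labels = torch.cat([torch.zeros(16, dtype=torch.long), torch.ones(16, dtype=torch.long)])
dom_logits = domain_head(GradientReversal.apply(feats, 1.0))
dom_loss = nn.functional.cross_entropy(dom_logits, dom_labels)  # domain confusion term

loss = rul_loss + mmd_loss + dom_loss          # trade-off weights omitted here
opt.zero_grad()
loss.backward()
opt.step()
```

The gradient reversal layer makes the extractor maximize the domain classifier's error while the classifier itself is trained to minimize it, which is one common way to realize the adversarial domain-confusion objective described above.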