Computer science
Robustness (evolution)
Transfer of learning
Artificial intelligence
Adversarial system
Machine learning
Set (abstract data type)
Class (philosophy)
Upper and lower bounds
Variance (accounting)
Transfer (computing)
Data mining
Mathematics
Accounting
Mathematical analysis
Business
Gene
Parallel computing
Biochemistry
Chemistry
Programming language
Authors
Yafei Deng, Jun Lv, Dongyue Huang, Shichang Du
Source
Journal: Neurocomputing
[Elsevier]
Date: 2023-09-01
Volume/Pages: 548: 126391-126391
Citations: 32
Identifier
DOI: 10.1016/j.neucom.2023.126391
Abstract
Recently, deep transfer learning-based intelligent machine diagnosis has been widely investigated, and the source and target domains are commonly assumed to share the same fault categories, a setting known as closed-set diagnosis transfer (CSDT). However, this assumption rarely holds in real engineering scenarios, because unknown new faults may occur unexpectedly due to the uncertainty and complexity of machinery components; this more general setting is known as open-set diagnosis transfer (OSDT). To solve this challenging but more realistic problem, a Theory-guided Progressive Transfer Learning Network (TPTLN) is proposed in this paper. First, the upper bound of the transfer learning model under the open-set setting is thoroughly analyzed, providing theoretical insight to guide model optimization. Second, a two-stage module is designed to repel unknown target samples and attract known samples through progressive learning, which effectively promotes inter-class separability and intra-class compactness. The proposed TPTLN is evaluated on two OSDT cases, in which diagnosis knowledge is transferred across bearings and gearboxes running under different working conditions. Comparative results show that the proposed method achieves better robustness and diagnostic performance under different degrees of domain shift and openness variance. The source code and links to the data can be found in the following GitHub repository: https://github.com/phoenixdyf/Theory-guided-Progressive-Transfer-LearningNetwork.
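The core open-set idea in the abstract, that a model must classify known fault categories while rejecting samples from unknown ones, can be illustrated with a minimal confidence-thresholding sketch. This is not the paper's TPTLN method (which uses a theory-guided two-stage progressive module); it is only a generic baseline, and the threshold value and `unknown_label` convention below are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def open_set_predict(logits, threshold=0.7, unknown_label=-1):
    """Assign each sample its most probable known class, or reject it as
    `unknown_label` when the top softmax confidence falls below `threshold`."""
    probs = softmax(np.asarray(logits, dtype=float))
    preds = probs.argmax(axis=-1)
    conf = probs.max(axis=-1)
    preds[conf < threshold] = unknown_label
    return preds

# Sample 1: one logit dominates -> confidently assigned to known class 0.
# Sample 2: flat logits -> low confidence, rejected as an unknown fault.
logits = np.array([[4.0, 0.5, 0.2],
                   [1.0, 1.1, 0.9]])
print(open_set_predict(logits))  # -> [ 0 -1]
```

In OSDT terms, the rejection branch plays the role of "distracting" unknown target samples, while confident assignments correspond to "attracting" known-class samples; the paper's progressive learning scheme refines this separation with learned features rather than a fixed threshold.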