Keywords
Autoencoder
Discriminative
Artificial intelligence
Pattern recognition (psychology)
Computer science
Co-training
Labeled data
Training set
Machine learning
Feature (linguistics)
Feature learning
Entropy (arrow of time)
Semi-supervised learning
Deep learning
Philosophy
Physics
Quantum mechanics
Linguistics
Authors
Amirhossein Berenji, Zahra Taghiyarrenani, Abbas Rohani Bastami
Identifier
DOI:10.1177/10775463231164445
Abstract
Intelligent fault diagnosis (IFD) based on deep learning methods has shown excellent performance; however, the massive amounts of data such methods require, together with the lack of sufficient labeled data, limit their real-world application. In this paper, we propose a two-step technique that uses unlabeled samples and a limited number of labeled samples to extract fault-discriminative features for classification. To this end, we first train an autoencoder (AE) on unlabeled samples to extract a set of potentially useful features for classification; subsequently, a contrastive learning-based post-training step exploits the limited available labeled samples to improve the discriminability of the feature set. Our experiments on the SEU bearing dataset show that unsupervised feature learning using AEs improves classification performance. In addition, we demonstrate the effectiveness of contrastive learning for the post-training process: this strategy outperforms cross-entropy-based post-training when labeled information is limited.
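The two-step pipeline the abstract describes can be sketched in miniature: pre-train an autoencoder on data treated as unlabeled, then post-train only the encoder with a supervised-contrastive-style loss on the few labeled samples. The sketch below is illustrative only, not the authors' implementation: the toy two-class Gaussian data, the linear AE, the loss form, and all hyperparameters are assumptions, and numerical gradients with backtracking stand in for autodiff.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for vibration features: two "fault classes" as Gaussian blobs.
X = np.vstack([rng.normal(0.0, 1.0, (20, 4)),   # class 0 samples
               rng.normal(3.0, 1.0, (20, 4))])  # class 1 samples
y = np.array([0] * 20 + [1] * 20)
n, d_in, d_z = len(X), 4, 2

E = rng.normal(0, 0.1, (d_z, d_in))  # encoder weights (assumed linear AE)
D = rng.normal(0, 0.1, (d_in, d_z))  # decoder weights

def recon_loss(E, D):
    """Mean squared reconstruction error of the linear AE."""
    return np.mean((X - X @ E.T @ D.T) ** 2)

# Step 1: unsupervised pre-training (plain gradient descent on the MSE),
# using no labels at all.
ae_loss_before = recon_loss(E, D)
for _ in range(200):
    Z = X @ E.T                        # latent codes, shape (n, d_z)
    R = X - Z @ D.T                    # reconstruction residuals
    gD = (2 * R.T @ Z) / n             # descent direction for D
    gE = (2 * D.T @ R.T @ X) / n       # descent direction for E
    D += 0.01 * gD
    E += 0.01 * gE
ae_loss_after = recon_loss(E, D)

def supcon_loss(E, tau=0.5):
    """Supervised-contrastive-style loss on L2-normalized encoder outputs."""
    Z = X @ E.T
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    S = Z @ Z.T / tau                  # pairwise cosine similarities / temperature
    total = 0.0
    for i in range(n):
        others = np.arange(n) != i
        pos = others & (y == y[i])     # same-class anchors are the positives
        log_denom = np.log(np.exp(S[i, others]).sum())
        total -= np.mean(S[i, pos] - log_denom)
    return total / n

def num_grad(E, eps=1e-5):
    """Central-difference gradient of supcon_loss w.r.t. the encoder weights."""
    g = np.zeros_like(E)
    for idx in np.ndindex(*E.shape):
        Ep, Em = E.copy(), E.copy()
        Ep[idx] += eps
        Em[idx] -= eps
        g[idx] = (supcon_loss(Ep) - supcon_loss(Em)) / (2 * eps)
    return g

# Step 2: contrastive post-training of the encoder only, using the labels.
con_loss_before = supcon_loss(E)
for _ in range(15):
    g, step = num_grad(E), 0.1
    while step > 1e-6 and supcon_loss(E - step * g) >= supcon_loss(E):
        step /= 2  # backtrack so a step cannot increase the loss
    E = E - step * g
con_loss_after = supcon_loss(E)

print(ae_loss_before, "->", ae_loss_after)    # reconstruction error drops
print(con_loss_before, "->", con_loss_after)  # contrastive loss drops
```

After step 2 the encoder's latent space pulls same-class samples together and pushes classes apart, which is the discriminability improvement the paper attributes to contrastive post-training; the paper's actual models are deep networks trained with autodiff rather than this linear toy.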