Computer science
Artificial intelligence
Interpretability
Machine learning
Pattern recognition (psychology)
Feature extraction
Semi-supervised learning
Shortage (economics)
Supervised learning
Data mining
Artificial neural network
Philosophy
Linguistics
Government (linguistics)
Authors
Long Cui, Xincheng Tian, Qingzhe Wei, Yan Liu
Identifier
DOI:10.1016/j.eswa.2023.121645
Abstract
The shortage of labeled data is a major obstacle to the practical application of advanced fault diagnosis technologies, and the large amount of unlabeled data may be the key to solving this problem. This paper proposes a self-attention-based contrastive learning method for bearing fault diagnosis that uses unlabeled data for self-supervised learning. With a self-attention-based signal transformer as the backbone, the proposed method learns feature extraction capability from a large amount of unlabeled data through contrastive learning that uses only positive samples. Then, after fine-tuning on a small amount of labeled data, the proposed method performs accurate fault diagnosis. Experiments on both run-to-failure and artificial-fault vibration signal datasets show that the proposed method not only outperforms other semi-supervised and self-supervised learning methods but also exceeds the accuracy of supervised learning methods when labels are insufficient. Visualizations demonstrate the interpretability of the model and the feature extraction ability obtained from self-supervised pre-training.
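The abstract does not spell out the exact objective; positive-pair-only contrastive learning is commonly implemented by maximizing the agreement (cosine similarity) between embeddings of two augmented views of the same signal, without any negative pairs. The sketch below illustrates that idea on a toy vibration signal; the augmentations, the random-projection "encoder" (standing in for the paper's signal transformer), and all names are assumptions for illustration only.

```python
import numpy as np

def augment(signal, rng):
    # Hypothetical augmentation: random amplitude scaling plus Gaussian noise.
    noise = rng.normal(0.0, 0.05, size=signal.shape)
    scale = rng.uniform(0.8, 1.2)
    return scale * signal + noise

def positive_pair_loss(z1, z2):
    # Negative cosine similarity between the two view embeddings.
    # Minimizing it pulls embeddings of the same signal together;
    # no negative samples are involved.
    z1 = z1 / np.linalg.norm(z1)
    z2 = z2 / np.linalg.norm(z2)
    return -float(np.dot(z1, z2))

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0.0, 20.0 * np.pi, 1024))  # toy "vibration signal"

# Two augmented views of the same signal form the positive pair.
v1, v2 = augment(x, rng), augment(x, rng)

# Stand-in encoder: a fixed random projection. In the paper this role
# is played by the self-attention signal transformer backbone.
W = rng.normal(size=(1024, 64))
loss = positive_pair_loss(v1 @ W, v2 @ W)
print(loss)  # near -1.0: views of one signal embed similarly
```

In a real implementation the encoder would be trained to minimize this loss over many unlabeled signals (with tricks such as stop-gradient or a momentum encoder to prevent collapse), and the pre-trained backbone would then be fine-tuned on the few labeled samples.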