Interpretability
Uncertainty quantification
Probabilistic logic
Computer science
Artificial intelligence
Machine learning
Trustworthiness
Transformer
Prior probability
Bayesian probability
Generalization
Uncertainty analysis
Data mining
Engineering
Mathematics
Simulation
Mathematical analysis
Voltage
Electrical engineering
Computer security
Authors
Yiming Xiao, Haidong Shao, Feng Ma, Te Han, Jiafu Wan, Bin Liu
Identifier
DOI:10.1016/j.jmsy.2023.07.012
Abstract
To enable researchers to fully trust the decisions made by deep diagnostic models, interpretable rotating machinery fault diagnosis (RMFD) research has emerged. Existing interpretable RMFD research focuses either on developing interpretable modules embedded in deep models to assign physical meaning to results, or on inferring the logic by which the model reaches its decisions. However, there is limited work on how to quantify the uncertainty in results and explain its sources and composition. Uncertainty quantification and decomposition not only provide the confidence of the results but also identify the sources of unknown factors in the data, and can consequently guide efforts to enhance the interpretability and trustworthiness of models. Therefore, this paper proposes to use Bayesian variational learning to introduce uncertainty into the attention weights of a Transformer, constructing a probabilistic Bayesian Transformer for trustworthy RMFD. A probabilistic attention mechanism is designed and its corresponding optimization objective is defined, which can infer the prior and variational posterior distributions of the attention weights, thereby empowering the model to perceive uncertainty. An uncertainty quantification and decomposition scheme is developed to characterize the confidence of the results and to separate epistemic from aleatoric uncertainty. The effectiveness of the proposed method is fully verified in three out-of-distribution generalization scenarios.
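The abstract does not give the paper's exact formulas, but the standard entropy-based scheme for separating epistemic from aleatoric uncertainty in a Bayesian model can be sketched as follows. Assuming the probabilistic attention weights are sampled T times to produce T class-probability vectors per input (a hypothetical interface; the paper's actual model and notation may differ), total predictive uncertainty is the entropy of the mean prediction, aleatoric uncertainty is the mean entropy of the individual predictions, and epistemic uncertainty is their difference (the mutual information):

```python
import numpy as np

def decompose_uncertainty(probs):
    """Entropy-based uncertainty decomposition from stochastic forward passes.

    probs: array of shape (T, C) -- class probabilities from T Monte Carlo
    samples of the weight posterior (hypothetical model output).
    Returns (total, aleatoric, epistemic) in nats.
    """
    eps = 1e-12  # numerical floor to avoid log(0)
    mean_p = probs.mean(axis=0)                       # predictive distribution
    total = -np.sum(mean_p * np.log(mean_p + eps))    # entropy of the mean
    # mean of per-sample entropies -> aleatoric (data) uncertainty
    aleatoric = (-np.sum(probs * np.log(probs + eps), axis=1)).mean()
    epistemic = total - aleatoric                     # mutual information
    return total, aleatoric, epistemic

# Two samples that disagree strongly: the mean prediction is uniform,
# so total uncertainty is high and a large epistemic share remains.
samples = np.array([[0.9, 0.1],
                    [0.1, 0.9]])
total, alea, epi = decompose_uncertainty(samples)
```

Under this decomposition, high epistemic uncertainty flags inputs the model has not learned about (e.g. out-of-distribution fault conditions), while high aleatoric uncertainty reflects noise inherent in the signal itself.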