Keywords
Interpretability
Inference
Computer science
Machine learning
Process (computing)
Artificial intelligence
Causal inference
Constraint (computer-aided design)
Traceability
Data mining
Mathematics
Econometrics
Software engineering
Geometry
Operating system
Authors
Xianyong Yin, Wei He, You Cao, Guohui Zhou, Hongyu Li
Identifiers
DOI:10.1016/j.ins.2023.119748
Abstract
Safety state assessment is an important aspect of maintenance decisions for complex systems. However, assessing the safety state of such systems is challenging due to their complexity and the potential consequences of failure. One way to address this challenge is to use models with process interpretability and result traceability. This paper proposes an interpretable belief rule base model with reverse causal inference (IBRB-R) as a useful approach to safety state assessment. First, interpretability criteria are proposed for the safety state assessment method to regulate the interpretability of the entire modeling process. Second, based on these criteria, three interpretability constraint strategies are designed to correct behaviors that undermine model interpretability. Then, the model's inference and optimization procedures are described. In addition, a reverse causal inference model based on the evidential reasoning (ER) algorithm is proposed to trace the causes of the assessment results and improve the reliability of the model. Finally, a case study on safety state assessment of the WD615 diesel engine verifies the validity of the proposed model.
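The abstract names the evidential reasoning (ER) algorithm as the engine of the reverse causal inference step but gives no formula. As a point of reference, the sketch below implements the standard analytic ER rule (the closed form due to Wang, Yang and Xu), which belief-rule-base models commonly use to fuse weighted belief distributions over a shared set of evaluation grades. The function name `er_aggregate`, the NumPy formulation, and the example numbers are illustrative assumptions, not the paper's actual IBRB-R implementation.

```python
import numpy as np

def er_aggregate(beliefs, weights):
    """Analytic evidential reasoning (ER) combination rule.

    beliefs : (L, N) array; beliefs[i, n] is the belief degree that
              attribute (or activated rule) i assesses grade n.
              Each row may sum to less than 1; the remainder is
              treated as local ignorance.
    weights : (L,) array of attribute weights (normalized internally).

    Returns (beta, beta_H): the aggregated belief in each of the
    N grades and the residual belief assigned to global ignorance.
    """
    beliefs = np.asarray(beliefs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize attribute weights

    L, N = beliefs.shape
    # 1 - w_i * sum_k beta_{k,i}: mass left unassigned by attribute i
    incomplete = 1.0 - weights * beliefs.sum(axis=1)

    # Product terms of the analytic ER formula:
    # per-grade product prod_i (w_i * beta_{n,i} + incomplete_i)
    term_n = np.prod(weights[:, None] * beliefs + incomplete[:, None], axis=0)
    term_d = np.prod(incomplete)        # product of unassigned masses
    term_w = np.prod(1.0 - weights)     # product of (1 - w_i)

    mu = 1.0 / (term_n.sum() - (N - 1) * term_d)  # normalization factor
    beta = mu * (term_n - term_d) / (1.0 - mu * term_w)
    beta_H = mu * (term_d - term_w) / (1.0 - mu * term_w)
    return beta, beta_H

# Hypothetical example: two attributes, three grades (e.g. safe /
# degraded / faulty), equal weights; attribute 2 leaves 0.1 unassigned.
beliefs = [[0.6, 0.3, 0.1],
           [0.2, 0.5, 0.2]]
beta, beta_H = er_aggregate(beliefs, [0.5, 0.5])
print(beta, beta_H)  # aggregated grade beliefs and residual ignorance
```

Unlike a simple weighted average, the ER rule keeps explicit track of unassigned belief, so incomplete or conflicting evidence shows up as residual ignorance rather than being silently redistributed; this property is what makes the fused result traceable back to the contributing rules.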