Keywords: Interpretability, Computer Science, Trustworthiness, Prognostics, Artificial Intelligence, Auditing, Machine Learning, Feature (linguistics), Data Mining, Computer Security, Linguistics, Philosophy, Economics, Management
Authors
Kazuma Kobayashi,Syed Bahauddin Alam
Identifier
DOI:10.1016/j.engappai.2023.107620
Abstract
Artificial intelligence (AI) and machine learning (ML) are increasingly used for digital twin development in energy and engineering systems, but these models must be fair, unbiased, interpretable, and explainable. It is critical to have confidence in AI's trustworthiness. ML techniques have been useful in predicting important parameters and improving model performance. However, for these AI techniques to be useful in making decisions, they need to be audited, accounted for, and easy to understand. Therefore, the use of explainable AI (XAI) and interpretable machine learning (IML) is crucial for the accurate prediction of prognostics, such as remaining useful life (RUL), in a digital twin system to make it intelligent while ensuring that the AI model is transparent in its decision-making processes and that the predictions it generates can be understood and trusted by users. By using explainable, interpretable, and trustworthy AI, intelligent digital twin systems can make more accurate predictions of RUL, leading to better maintenance and repair planning and, ultimately, improved system performance. This paper aims to explain the ideas of XAI and IML and justify the important role of AI/ML for the digital twin components, which require XAI to better understand the predictions. This paper explains the importance and fundamentals of XAI and IML in both local and global aspects in terms of feature selection, model interpretability, and model diagnosis and validation to ensure the reliable use of trustworthy AI/ML applications for RUL prediction.
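One widely used global interpretability technique of the kind the abstract alludes to (feature selection and model diagnosis) is permutation importance: shuffle one input feature and measure how much the model's error grows. The sketch below is a minimal, self-contained illustration for a hypothetical RUL predictor; the sensor names, synthetic data, and the stand-in model are all assumptions for illustration, not the paper's method or data.

```python
# Minimal sketch: permutation importance as a global interpretability check
# for a hypothetical RUL predictor. Data and model are synthetic illustrations.
import random

random.seed(0)

# Synthetic sensor features: temperature, vibration, and a pure-noise channel.
X = [[random.random(), random.random(), random.random()] for _ in range(200)]
# Hypothetical ground-truth RUL depends only on the first two features.
y = [100 - 60 * t - 30 * v for t, v, _ in X]

def predict(row):
    """Stand-in 'trained' model with the same structure as the ground truth."""
    t, v, _ = row
    return 100 - 60 * t - 30 * v

def mse(rows):
    return sum((predict(x) - yi) ** 2 for x, yi in zip(rows, y)) / len(rows)

def permutation_importance(feature):
    """Increase in MSE when one feature column is shuffled."""
    shuffled = [row[:] for row in X]
    col = [row[feature] for row in shuffled]
    random.shuffle(col)
    for row, v in zip(shuffled, col):
        row[feature] = v
    return mse(shuffled) - mse(X)

for name, i in [("temperature", 0), ("vibration", 1), ("noise", 2)]:
    print(f"{name}: {permutation_importance(i):.2f}")
```

As expected, the noise channel scores near zero while temperature (the strongest driver of the synthetic RUL) scores highest, which is how such a diagnostic helps validate that a model relies on physically meaningful inputs.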