Interpretability
Artificial intelligence
Reinforcement learning
Computer science
Machine learning
Deep learning
Clarity
Backpropagation
Electric power system
Probabilistic logic
Power (physics)
Artificial neural network
Biochemistry
Quantum mechanics
Physics
Chemistry
Authors
Ke Zhang,Jun Zhang,Peidong Xu,Tianlu Gao,David Yang Gao
Identifier
DOI:10.1109/tcss.2021.3096824
Abstract
Artificial intelligence (AI) technology has become an important trend to support the analysis and control of complex and time-varying power systems. Although deep reinforcement learning (DRL) has been utilized in the power system field, most of these DRL models are regarded as black boxes, which are difficult to explain and cannot be used in settings where human operators need to participate. Using explainable AI (XAI) technology to explain why power system models make certain decisions is as important as the accuracy of the decisions themselves, because it ensures trust and transparency in the model's decision-making process. This article discusses the interpretability issue in DRL models for power system emergency control. The proposed interpretable method is a backpropagation deep explainer based on Shapley additive explanations (SHAP), named the Deep-SHAP method. The Deep-SHAP method is adopted to provide a reasonable interpretable model for a DRL-based emergency control application. For the DRL model, the importance of the input features is quantified to obtain their contributions to the model's outcome. Further, feature classification of the inputs and probabilistic analysis of the outputs in the XAI model are added to the interpretability results for better clarity.
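The abstract does not detail the Deep-SHAP computation itself, but the underlying idea of SHAP is to attribute a model's output to its input features via Shapley values. As a hedged illustration (not the authors' Deep-SHAP implementation, which uses a backpropagation-based approximation for deep networks), the sketch below computes exact Shapley values for a toy, hypothetical "policy score" function, averaging each feature's marginal contribution over all subsets of the other features relative to a baseline input:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attribution: phi[i] averages the marginal
    contribution of feature i over all subsets of the other features,
    with absent features replaced by their baseline values."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in S else baseline[j]
                             for j in range(n)]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Toy linear "policy score" (hypothetical, for illustration only).
# For a linear model, each Shapley value reduces to
# weight_i * (x_i - baseline_i), so the result is easy to verify.
f = lambda v: 2.0 * v[0] + 1.0 * v[1] - 3.0 * v[2]
phi = shapley_values(f, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # [2.0, 2.0, -9.0]
```

By the efficiency property, the attributions sum to `f(x) - f(baseline)` (here -5.0). Exact enumeration is exponential in the number of features, which is why practical explainers such as Deep-SHAP rely on approximations tailored to deep networks.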