Interpretability
Deep learning
Artificial intelligence
Computer science
Machine learning
Data science
Field (mathematics)
Mathematics
Pure mathematics
Authors
Kang Cheng, Ning Wang, Maozhen Li
Source
Journal: Lecture Notes on Data Engineering and Communications Technologies
Date: 2021-01-01
Pages: 475-486
Citations: 6
Identifier
DOI: 10.1007/978-3-030-70665-4_54
Abstract
Research on the interpretability of deep learning is closely related to engineering, machine learning, mathematics, cognitive psychology and other disciplines, and it has significant theoretical value and practical importance in fields such as information push, medical research, autonomous driving and information security. Past research has made progress on the black-box problem of deep learning, but many challenges remain. This paper therefore first summarizes the history of and related work on deep learning interpretability research, and surveys the current state of the field from three perspectives: visual analysis, robust perturbation analysis and sensitivity analysis. It then reviews research on constructing interpretable deep learning models from four perspectives: model agents, logical reasoning, network node association analysis and traditional machine learning models. The shortcomings of existing methods are also analyzed and discussed. Finally, typical applications of interpretable deep learning are listed, possible future research directions in this field are outlined, and corresponding suggestions are put forward.
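As an illustration of the perturbation- and sensitivity-based analysis the abstract mentions (not a method taken from the paper itself), the following is a minimal sketch in Python: it probes a black-box model by occluding regions of an input and recording how much the prediction score drops, producing a simple importance heatmap. The `predict` function here is a hypothetical stand-in for a trained deep model.

```python
# Minimal occlusion-style perturbation sensitivity sketch (illustrative only).
import numpy as np

def predict(image: np.ndarray) -> float:
    # Hypothetical stand-in for a trained deep model's class score.
    return float(image.mean())

def occlusion_sensitivity(image: np.ndarray, patch: int = 4,
                          baseline: float = 0.0) -> np.ndarray:
    """Slide a patch over the image; larger score drops mark more important regions."""
    h, w = image.shape
    base_score = predict(image)
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline  # perturb one region
            heatmap[i // patch, j // patch] = base_score - predict(occluded)
    return heatmap

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((16, 16))
    print(occlusion_sensitivity(img))
```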