Computer Science
Convolutional Neural Network
Artificial Intelligence
Machine Learning
Speech Recognition
Authors
Lynn Vonder Haar,Timothy Elvira,Omar Ochoa
Identifier
DOI:10.1016/j.engappai.2022.105606
Abstract
Deep learning models have gained a reputation for high accuracy in many domains. Convolutional Neural Networks (CNNs) are specialized for image recognition and achieve high accuracy in classifying objects within images. However, CNNs are an example of a black box model, meaning that experts are unsure how they work internally to reach a classification decision. Without knowing the reasoning behind a decision, there is low confidence that CNNs will continue to make accurate decisions, so it is unsafe to use them in high-risk or safety-critical fields without first developing methods to explain their decisions. This paper is a survey and analysis of the available explainability methods for showing the reasoning behind CNN decisions.
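As a concrete illustration of what an explainability method for a CNN can look like, the sketch below computes a gradient-based saliency map, one of the common post-hoc techniques covered in surveys of this area. It is not a method taken from the paper itself; the pretrained ResNet-18 model and the image path `example.jpg` are assumptions chosen purely for illustration.

```python
# Minimal sketch of a gradient-based saliency map for a CNN classifier.
# Assumes torchvision >= 0.13 (for the weights API) and a local image file
# "example.jpg"; both are placeholders, not details from the surveyed paper.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("example.jpg").convert("RGB")   # placeholder input image
x = preprocess(img).unsqueeze(0)                 # shape: (1, 3, 224, 224)
x.requires_grad_(True)

logits = model(x)
top_class = logits.argmax(dim=1).item()
# Backpropagate the top-class score to the input pixels.
logits[0, top_class].backward()

# Saliency = maximum absolute gradient across colour channels: pixels whose
# small changes would most affect the predicted class score.
saliency = x.grad.abs().max(dim=1)[0].squeeze(0)  # shape: (224, 224)
print(saliency.shape)
```

Overlaying the resulting saliency map on the input image highlights the regions the network relied on for its prediction, which is one simple form of the "reasoning behind CNN decisions" that the surveyed methods aim to expose.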