Keywords
Disgust, Artificial intelligence, Computer science, Deep learning, Artificial neural network, Facial expression, Face (sociological concept), Set (abstract data type), Object (grammar), Machine learning, Relevance (law), Happiness, Deep neural network, Pattern recognition (psychology), Psychology, Social psychology, Programming language, Law, Sociology, Anger, Social science, Political science
Authors
Katharina Weitz, Teena Hassan, Ute Schmid, Jens-Uwe Garbas
Source
Journal: tm - Technisches Messen (Oldenbourg Wissenschaftsverlag)
Date: 2019-06-18
Volume/Issue: 86 (7-8): 404-412
Citations: 48
Identifier
DOI: 10.1515/teme-2019-0024
Abstract
Deep neural networks are successfully used for object and face recognition in images and videos. However, for practical applications, for example as a pain-recognition tool in hospitals, the current procedures are only suitable to a limited extent. The advantage of deep neural methods is that they can learn complex non-linear relationships between raw data and target classes without being restricted to a set of hand-crafted features provided by humans. The disadvantage is that, due to the complexity of these networks, it is not possible to interpret the knowledge stored inside them: they are black-box learning procedures. Explainable Artificial Intelligence (AI) approaches mitigate this problem by extracting explanations for decisions and representing them in a human-interpretable form. The aim of this paper is to investigate the explainable AI methods Layer-wise Relevance Propagation (LRP) and Local Interpretable Model-agnostic Explanations (LIME). These approaches are applied to explain how a deep neural network distinguishes facial expressions of pain from facial expressions of emotions such as happiness and disgust.
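To make the LRP idea concrete, the following is a minimal sketch of the epsilon rule for a single dense layer in NumPy. This is not the paper's implementation: the toy two-layer network, its random weights, and the choice of the epsilon rule are assumptions for illustration only. The rule redistributes the relevance R_k of each output neuron to the inputs in proportion to their contribution a_j * w_jk to the pre-activation z_k, so that relevance is (approximately) conserved layer by layer.

```python
import numpy as np

def lrp_epsilon_dense(a, w, b, relevance_out, eps=1e-6):
    """LRP epsilon rule for one dense layer:
    R_j = a_j * sum_k w_jk * R_k / (z_k + eps * sign(z_k))."""
    z = a @ w + b                              # pre-activations, shape (K,)
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilize the division
    s = relevance_out / z                      # per-output scaling, shape (K,)
    return a * (w @ s)                         # input relevances, shape (J,)

# Toy two-layer ReLU network with random weights (purely illustrative,
# not the pain-recognition network from the paper).
rng = np.random.default_rng(0)
a0 = rng.random(4)                             # input activations
w1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
a1 = np.maximum(0.0, a0 @ w1 + b1)             # hidden ReLU activations
w2, b2 = rng.normal(size=(3, 2)), np.zeros(2)
logits = a1 @ w2 + b2

# Start from the winning logit as the total relevance, then propagate
# it backwards layer by layer down to the inputs.
R2 = np.zeros(2)
R2[logits.argmax()] = logits.max()
R1 = lrp_epsilon_dense(a1, w2, b2, R2)
R0 = lrp_epsilon_dense(a0, w1, b1, R1)
print(R0)  # per-input relevance; sums approximately to the winning logit
```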
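LIME, by contrast, treats the network as a black box: it perturbs superpixels of the input image, queries the model on the perturbed copies, and fits a local interpretable surrogate model to the responses. Below is a minimal sketch using the open-source `lime` package. Here `model` and `face` are hypothetical placeholders for a trained facial-expression classifier and one input image, and the three-class pain/happiness/disgust output is an assumption based on the abstract.

```python
from lime import lime_image
from skimage.segmentation import mark_boundaries

# `model` and `face` are hypothetical: a trained facial-expression
# classifier and one RGB face image as an (H, W, 3) numpy array.
def predict_fn(images):
    # Map a batch of images to class probabilities, e.g. shape (N, 3)
    # for the pain / happiness / disgust classes assumed in this sketch.
    return model.predict(images)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    face, predict_fn,
    top_labels=3,      # explain the three most probable classes
    hide_color=0,      # perturbed superpixels are blacked out
    num_samples=1000,  # perturbed samples used to fit the local model
)

# Highlight the superpixels that most support the top predicted class.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True, num_features=5, hide_rest=False,
)
overlay = mark_boundaries(img / 255.0, mask)  # regions the model relied on
```

Unlike LRP, this requires no access to the network's weights, which is why LIME is called model-agnostic; the trade-off is that the explanation depends on the segmentation and the sampling budget.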