Keywords
Interpretability
Computer science
Electroencephalography (EEG)
Artificial intelligence
Convolutional neural network
Deep learning
Machine learning
Task (project management)
Artificial neural network
Psychology
Neuroscience
Management
Economics
Authors
Chen Cui, Ying Zhang, Shenghua Zhong
Identifier
DOI: 10.1109/cbms55023.2022.00037
Abstract
Despite their success in many domains, deep learning models remain largely black boxes. Understanding the reasons behind a model's predictions is essential for assessing trust, which is fundamental in EEG analysis tasks. In this work, we propose to use two representative explanation approaches, LIME and Grad-CAM, to explain the predictions of a simple convolutional neural network on an EEG-based emotional brain-computer interface. Our results demonstrate that these interpretability approaches reveal which features best discriminate the target emotions and provide insight into the neural processes underlying the model's learned behavior.
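As a concrete illustration of the second of these methods, below is a minimal Grad-CAM sketch for a 1D convolutional EEG classifier, assuming PyTorch. The `TinyEEGNet` architecture, the `grad_cam` helper, and the 32-electrode, 128-sample input are hypothetical placeholders; they do not reproduce the network or data used in the paper.

```python
# Minimal Grad-CAM sketch for a 1D EEG CNN (PyTorch).
# Architecture and input shape are illustrative assumptions,
# not the model described in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEEGNet(nn.Module):
    def __init__(self, n_channels=32, n_classes=2):
        super().__init__()
        self.conv1 = nn.Conv1d(n_channels, 16, kernel_size=7, padding=3)
        self.conv2 = nn.Conv1d(16, 32, kernel_size=7, padding=3)
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                 # x: (batch, channels, time)
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))         # conv2 is the Grad-CAM target layer
        return self.head(x.mean(dim=-1))  # global average pool, then classify

def grad_cam(model, x, target_class):
    """Return a (batch, time) relevance map from the last conv layer."""
    activations, gradients = [], []
    layer = model.conv2
    h1 = layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))
    try:
        logits = model(x)
        model.zero_grad()
        logits[:, target_class].sum().backward()
    finally:
        h1.remove(); h2.remove()
    acts, grads = activations[0], gradients[0]   # (batch, filters, time)
    weights = grads.mean(dim=-1, keepdim=True)   # one weight per filter
    cam = F.relu((weights * acts).sum(dim=1))    # weighted sum; keep positive evidence
    return cam / (cam.max(dim=-1, keepdim=True).values + 1e-8)

model = TinyEEGNet()
eeg = torch.randn(1, 32, 128)  # one fake trial: 32 electrodes, 128 samples
print(grad_cam(model, eeg, target_class=1).shape)  # torch.Size([1, 128])
```

Following the standard Grad-CAM formulation, averaging the class gradient over the time axis yields one importance weight per filter, and the ReLU keeps only features that support the target class; the resulting map indicates which time segments of the trial drove the prediction.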