Deep learning
Electroencephalography
Computer science
Artificial intelligence
Convolutional neural network
Context (archaeology)
Machine learning
Visualization
Raw data
Data science
Pattern recognition (psychology)
Psychology
Neuroscience
Biology
Paleontology
Programming language
Authors
Charles A. Ellis, Martina Lapera Sancho, Robyn L. Miller, Vince D. Calhoun
Identifier
DOI:10.1101/2024.03.19.585728
Abstract
Deep learning methods are increasingly being applied to raw electroencephalogram (EEG) data. However, if these models are to be used in clinical or research contexts, methods to explain them must be developed, and because existing training approaches are inherently random, methods for combining explanations across large numbers of models are also needed. Model visualization-based explainability methods for EEG structure a model architecture so that its extracted features can be characterized directly, and they have the potential to offer highly useful insights into the patterns a model uncovers. Nevertheless, these methods have been underexplored within the context of multichannel EEG, and methods to combine their explanations across folds have not yet been developed. In this study, we present two novel convolutional neural network (CNN)-based architectures and apply them to automated major depressive disorder (MDD) diagnosis. Our models obtain slightly lower classification performance than a baseline architecture. However, across 50 training folds, they indicate that individuals with MDD exhibit higher β power, potentially higher δ power, and higher brain-wide correlation that is most strongly represented within the right hemisphere. This study provides multiple key insights into MDD and represents a significant step forward for explainable deep learning applied to raw EEG. We hope that it will inspire future efforts that eventually enable explainable EEG deep learning models to contribute both to clinical care and to novel medical research discoveries.
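To make the abstract's central idea concrete, the sketch below illustrates one way a visualization-based explainable CNN for raw multichannel EEG could be structured, and how per-fold explanations might be combined. This is a minimal, hypothetical illustration, not the authors' architecture: all names, layer sizes, channel counts, and the fold count of 5 (standing in for the paper's 50) are assumptions. The key idea shown is that an explicit first-layer temporal filter bank can be read out as FIR filters, and their frequency responses averaged across training folds to counteract training randomness.

```python
# Hypothetical sketch (not the paper's code): a CNN for raw multichannel
# EEG whose first layer is an explicit temporal filter bank. Because the
# kernels are plain FIR filters, their magnitude spectra can be visualized
# and averaged across folds, e.g. to compare beta-band (13-30 Hz) emphasis.
import numpy as np
import torch
import torch.nn as nn

class InterpretableEEGNet(nn.Module):
    def __init__(self, n_channels=19, n_filters=16, kernel_len=65, n_classes=2):
        super().__init__()
        # Temporal filter bank: the explainability "hook" of the model.
        self.temporal = nn.Conv1d(n_channels, n_filters, kernel_len,
                                  padding=kernel_len // 2, bias=False)
        self.head = nn.Sequential(
            nn.BatchNorm1d(n_filters), nn.ELU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(n_filters, n_classes))

    def forward(self, x):  # x: (batch, channels, time)
        return self.head(self.temporal(x))

def filter_spectra(model, fs=200.0, n_fft=512):
    """Magnitude spectra of the learned temporal kernels."""
    w = model.temporal.weight.detach().cpu().numpy()  # (filters, chans, taps)
    spec = np.abs(np.fft.rfft(w, n=n_fft, axis=-1)).mean(axis=1)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    return freqs, spec

# Combine explanations across folds by averaging each fold's filter spectra.
models = [InterpretableEEGNet() for _ in range(5)]  # stand-ins for 50 folds
freqs, _ = filter_spectra(models[0])
mean_spec = np.mean([filter_spectra(m)[1] for m in models], axis=0)
beta = (freqs >= 13) & (freqs <= 30)
print("mean beta-band filter magnitude:", mean_spec[:, beta].mean())
```

In a real setting, each model in the list would be trained on its own fold before its spectra are averaged; averaging the explanations rather than the weights is what makes the summary robust to the randomness of individual training runs.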