Keywords
Computer science, Modality (human-computer interaction), Spectrogram, Bottleneck, Artificial intelligence, Sensitivity (control systems), Convolutional neural network, Pattern recognition (psychology), Feature (linguistics), Dual (grammatical number), Key (lock), Modal verb, Encoding (set theory), Electroencephalography, Benchmark (surveying), Geography, Programming language, Polymer chemistry, Art, Chemistry, Set (abstract data type), Embedded system, Geodesy, Philosophy, Engineering, Literature, Psychiatry, Linguistics, Computer security, Electronic engineering, Psychology
Authors
Jiale Wang, Xinting Ge, Yunfeng Shi, Mengxue Sun, Qingtao Gong, Haipeng Wang, Wenhui Huang
Identifier
DOI: 10.1142/s0129065722500617
Abstract
In recent years, deep learning has shown very competitive performance in seizure detection. However, most currently used methods either convert electroencephalogram (EEG) signals into spectral images and apply 2D-CNNs, or split the one-dimensional (1D) EEG signal features into many segments and apply 1D-CNNs. These approaches are further limited by their failure to model the temporal links between time-series segments or spectrogram images. We therefore propose a Dual-Modal Information Bottleneck (Dual-modal IB) network for EEG seizure detection. The network extracts EEG features from both the time-series and spectrogram dimensions and passes information from the two modalities through the Dual-modal IB, which requires the model to gather and condense the most pertinent information in each modality and share only what is necessary. Specifically, we make full use of the information shared between the two modality representations to obtain the key information for seizure detection and to remove irrelevant features between the two modalities. In addition, to exploit the intrinsic temporal dependencies, we introduce a bidirectional long short-term memory (BiLSTM) into the Dual-modal IB model, which models the temporal relationships among the features extracted from each modality by a convolutional neural network (CNN). On the CHB-MIT dataset, the proposed framework achieves an average segment-based sensitivity of 97.42%, specificity of 99.32%, and accuracy of 98.29%, as well as an average event-based sensitivity of 96.02% with a false detection rate (FDR) of 0.70/h. We release our code at https://github.com/LLLL1021/Dual-modal-IB.
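The abstract outlines a two-branch architecture: a 1D-CNN over the raw EEG time series and a 2D-CNN over the spectrogram, each followed by a BiLSTM, with an information bottleneck governing what the two modalities share. The sketch below is a minimal PyTorch illustration of that layout, not the authors' released code (see the GitHub link above for that); the layer sizes, channel counts, and the simple concatenation-plus-linear "bottleneck" fusion are assumptions, and the information-bottleneck objective itself (the mutual-information regularization between modalities) is omitted for brevity.

```python
# Minimal sketch of a dual-branch EEG seizure detector (illustrative only).
# Assumptions: 18 EEG channels, 1D-CNN + BiLSTM for the time series,
# 2D-CNN + BiLSTM for the spectrogram, and a linear "bottleneck" fusing
# the two modality embeddings before classification.
import torch
import torch.nn as nn


class TimeSeriesBranch(nn.Module):
    """1D-CNN over raw EEG channels, followed by a BiLSTM over time."""
    def __init__(self, in_channels: int, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)

    def forward(self, x):              # x: (batch, channels, time)
        feats = self.cnn(x)            # (batch, 64, time')
        feats = feats.transpose(1, 2)  # (batch, time', 64)
        out, _ = self.lstm(feats)      # (batch, time', 2*hidden)
        return out.mean(dim=1)         # temporal average pooling


class SpectrogramBranch(nn.Module):
    """2D-CNN over spectrograms, followed by a BiLSTM over time frames."""
    def __init__(self, in_channels: int, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),  # collapse the frequency axis
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)

    def forward(self, x):                 # x: (batch, channels, freq, time)
        feats = self.cnn(x).squeeze(2)    # (batch, 32, time')
        feats = feats.transpose(1, 2)     # (batch, time', 32)
        out, _ = self.lstm(feats)
        return out.mean(dim=1)


class DualModalSeizureNet(nn.Module):
    """Fuse the two modality embeddings and classify seizure vs. non-seizure."""
    def __init__(self, eeg_channels: int = 18, hidden: int = 64):
        super().__init__()
        self.time_branch = TimeSeriesBranch(eeg_channels, hidden)
        self.spec_branch = SpectrogramBranch(eeg_channels, hidden)
        self.bottleneck = nn.Linear(4 * hidden, hidden)  # shared compressed code
        self.classifier = nn.Linear(hidden, 2)

    def forward(self, eeg, spec):
        z = torch.cat([self.time_branch(eeg), self.spec_branch(spec)], dim=1)
        z = torch.relu(self.bottleneck(z))
        return self.classifier(z)


if __name__ == "__main__":
    model = DualModalSeizureNet()
    eeg = torch.randn(2, 18, 1024)     # 2 segments, 18 channels, 1024 samples
    spec = torch.randn(2, 18, 64, 64)  # matching 64x64 spectrograms per channel
    print(model(eeg, spec).shape)      # torch.Size([2, 2])
```

Per-segment logits from a model like this would then be thresholded and aggregated over consecutive windows to produce the segment-based and event-based metrics (sensitivity, specificity, FDR) reported in the abstract.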