
AC-CfC: An attention-based convolutional closed-form continuous-time neural network for raw multi-channel EEG-based emotion recognition

Keywords: electroencephalography (EEG), convolutional neural network, computer science, channel, speech recognition, pattern recognition, artificial intelligence, psychology, neuroscience, telecommunications
Authors
Yiwu Wang, Yingyue Zhou, Weikun Lu, Qiao Wu, Qiang Li, Runfeng Zhang
Source
Journal: Biomedical Signal Processing and Control [Elsevier]
Volume/article number: 94: 106249. Cited by: 6
Identifier
DOI: 10.1016/j.bspc.2024.106249
Abstract

Emotion recognition based on electroencephalogram (EEG) is a critical task in the field of affective brain-computer interfaces. However, due to the non-stationarity and individual variability of EEG, hand-designed features cannot adequately capture the nonlinear and high-dimensional properties of raw EEG. Spatial–temporal models have been verified to effectively capture the spatial–temporal information of EEG, but these models are characterized by complex structures, large parameter counts, and the need for extensive training data. To overcome these disadvantages, this paper proposes an end-to-end spatial–temporal model for emotion recognition based on raw EEG, called the attention-based convolutional closed-form continuous-time neural network (AC-CfC). The model employs a channel attention mechanism to weight and encode EEG, capturing channel dependencies. Subsequently, one-dimensional convolutional neural networks and closed-form continuous-time neural networks are used to extract deep spatial–temporal features. Additionally, an adaptive loss-controlling mechanism is designed to enhance the model's ability to discriminate between easily confused classes. To verify the effectiveness of the proposed model, experiments are conducted on the DEAP and DREAMER datasets. The average accuracies of the proposed model reach 94.76% and 93.01% for valence and arousal in subject-independent experiments on the DEAP dataset, improvements of 11.8% and 8.73% respectively over the state-of-the-art model ACRNN. On the DREAMER dataset, the average accuracies reach 81.83%, 81.22%, and 80.63% for valence, arousal, and dominance, improvements of 2.55%, 6.64%, and 6.98% respectively over ACRNN. These results show that the proposed model outperforms several state-of-the-art models in the same category.
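The pipeline described in the abstract — channel attention over raw EEG channels, a 1D convolution for spatial features, then a closed-form continuous-time (CfC) recurrence over time — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the squeeze-and-excitation-style attention, the simplified single-gate CfC update, and all dimensions and random parameters are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation-style re-weighting of EEG channels.
    x: (channels, time). A per-channel statistic is squeezed out over
    time, then mapped to a weight in (0, 1) for each channel."""
    s = x.mean(axis=1)                 # squeeze: one statistic per channel
    w = sigmoid(w2 @ np.tanh(w1 @ s))  # excitation: channel weights
    return x * w[:, None]              # re-weight channels

def cfc_step(x, h, Wf, Wg):
    """One simplified closed-form continuous-time update: a learned
    gate blends the previous hidden state with a candidate state."""
    z = np.concatenate([x, h])
    gate = sigmoid(Wf @ z)             # decay-like gate
    return gate * h + (1 - gate) * np.tanh(Wg @ z)

C, T, H = 4, 16, 8                     # channels, time steps, hidden size
x = rng.standard_normal((C, T))        # toy raw multi-channel EEG segment

# hypothetical random parameters, standing in for learned weights
w1 = rng.standard_normal((C // 2, C))
w2 = rng.standard_normal((C, C // 2))
Wf = rng.standard_normal((H, C + H))
Wg = rng.standard_normal((H, C + H))

feat = channel_attention(x, w1, w2)    # (C, T), channel-weighted EEG
h = np.zeros(H)
for t in range(T):                     # recur over time samples
    h = cfc_step(feat[:, t], h, Wf, Wg)

print(h.shape)  # (8,) -- final hidden state fed to a classifier head
```

In the full model a 1D CNN would sit between the attention block and the recurrence to downsample time and expand the feature dimension; here the attended channels feed the CfC cell directly to keep the sketch short. Because each update is a convex combination of the previous state and a tanh candidate, the hidden state stays bounded in (-1, 1).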
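The abstract's "adaptive loss-controlling mechanism" for easily confused classes is not specified in detail here; one common way to realize that idea is a focal-loss-style modulating factor that up-weights samples the model is unsure about. The sketch below is an assumed illustration of that general technique, not the paper's exact loss.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def confusion_weighted_ce(logits, labels, gamma=2.0):
    """Cross-entropy with a focal-style factor (1 - p_true)^gamma:
    samples whose true-class probability is low (i.e. easily confused
    with another class) contribute more to the loss."""
    p = softmax(logits)
    p_true = p[np.arange(len(labels)), labels]   # prob. of the true class
    return float(np.mean(-((1.0 - p_true) ** gamma) * np.log(p_true + 1e-12)))

# a confident correct prediction vs. an ambiguous one
loss_confident = confusion_weighted_ce(np.array([[5.0, 0.0]]), np.array([0]))
loss_ambiguous = confusion_weighted_ce(np.array([[0.1, 0.0]]), np.array([0]))
```

With gamma = 0 this reduces to plain cross-entropy; larger gamma shifts training effort toward the hard, confusable examples, which matches the stated goal of sharpening decisions between similar emotion classes.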