Xiang Li, Dawei Song, Peng Zhang, Guangliang Yu, Yuexian Hou, Bin Hu
Source
Conference: IEEE International Conference on Bioinformatics and Biomedicine (BIBM) · Date: 2016-12-01 · Cited by: 116
Identifier
DOI:10.1109/bibm.2016.7822545
Abstract
Automatic emotion recognition based on multi-channel neurophysiological signals, as a challenging pattern recognition task, is becoming an important computer-aided method for diagnosing emotional disorders in neurology and psychiatry. Traditional approaches require designing and extracting a range of features from single- or multiple-channel signals, based on extensive domain knowledge. This may be an obstacle for non-domain experts. Moreover, traditional feature fusion methods cannot fully utilize the correlation information between different channels. In this paper, we propose a preprocessing method that encapsulates the multi-channel neurophysiological signals into grid-like frames through wavelet and scalogram transforms. We further design a hybrid deep learning model that combines a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) to extract task-related features, mine inter-channel correlations, and incorporate contextual information from those frames. Experiments are carried out on a trial-level emotion recognition task using the DEAP benchmark dataset. Our results demonstrate the effectiveness of the proposed methods with respect to the emotional dimensions of Valence and Arousal.
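The pipeline described in the abstract can be illustrated with a brief sketch. The first part below shows one plausible way to turn a multi-channel trial into grid-like scalogram frames using a continuous wavelet transform; the choice of the Morlet wavelet, the scale range, and the pywt-based implementation are assumptions for illustration, not the paper's exact preprocessing.

    import numpy as np
    import pywt

    def to_scalogram_frames(trial, scales=np.arange(1, 33), wavelet="morl"):
        # trial: (n_channels, n_samples) array for one recording trial.
        # Returns a (n_channels, n_scales, n_samples) stack of magnitude
        # scalograms, i.e. a grid-like time-frequency "frame" per channel.
        frames = []
        for channel in trial:
            coeffs, _ = pywt.cwt(channel, scales, wavelet)  # (n_scales, n_samples)
            frames.append(np.abs(coeffs))                   # magnitude scalogram
        return np.stack(frames)

The second part sketches a hybrid CNN+RNN model in PyTorch: a small CNN encodes each frame, an LSTM aggregates the frame sequence to capture context, and a linear head outputs a binary high/low decision for Valence or Arousal. The layer sizes, the use of 32 input channels (the DEAP EEG channels), and the LSTM choice are illustrative assumptions rather than the architecture reported in the paper.

    import torch
    import torch.nn as nn

    class CnnRnnEmotionNet(nn.Module):
        def __init__(self, in_channels=32, hidden=128, n_classes=2):
            super().__init__()
            # Per-frame CNN encoder over the (scale x time) grid of each frame.
            self.cnn = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),          # -> (batch*steps, 64, 1, 1)
            )
            # RNN over the sequence of frame embeddings (contextual information).
            self.rnn = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):                     # x: (batch, steps, channels, H, W)
            b, t, c, h, w = x.shape
            feats = self.cnn(x.view(b * t, c, h, w)).view(b, t, -1)
            out, _ = self.rnn(feats)              # contextual features across frames
            return self.head(out[:, -1])          # last step -> trial-level logits

A trial would then be scored roughly as logits = CnnRnnEmotionNet()(frames_tensor), where frames_tensor is built by windowing the scalograms from the first sketch along time into a (batch, steps, channels, H, W) tensor.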