Computer science
Support vector machine
Electroencephalography (EEG)
Artificial intelligence
Pattern recognition (psychology)
Weighting
Modality (human-computer interaction)
Classifier (UML)
Pattern
Speech recognition
Feature (linguistics)
Feature vector
Psychology
Medicine
Sociology
Philosophy
Radiology
Psychiatry
Linguistics
Social science
Authors
Hanshu Cai, Zhidiao Qu, Zhe Li, Yi Zhang, Xiping Hu, Bin Hu
Identifier
DOI:10.1016/j.inffus.2020.01.008
Abstract
This study aimed to construct a novel multimodal model by fusing different electroencephalogram (EEG) data sources, recorded under neutral, negative, and positive audio stimulation, to discriminate between depressed patients and normal controls. The EEG data of the different modalities were fused with a feature-level fusion technique to construct a depression recognition model. The EEG signals of 86 depressed patients and 92 normal controls were recorded while the participants received the different audio stimuli. Linear and nonlinear features were then extracted and selected from the EEG signals of each modality. In addition, a linear combination technique was used to fuse the EEG features of the different modalities into a global feature vector and to identify several powerful features. Furthermore, genetic algorithms were used to perform feature weighting to improve the overall performance of the recognition framework. The classification accuracies of three classifiers, namely k-nearest neighbor (KNN), decision tree (DT), and support vector machine (SVM), were compared, and the results were encouraging. The highest classification accuracy of 86.98% was obtained by the KNN classifier on the fusion of the positive and negative audio stimuli, demonstrating that the fused modalities can achieve higher depression recognition accuracy than the individual modality schemes. This study may provide an additional tool for identifying patients with depression.
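The pipeline described in the abstract (feature-level fusion of per-modality EEG features into a global vector, genetic-algorithm-based feature weighting, and classification) can be sketched as below. This is a minimal illustration, not the authors' implementation: the synthetic feature matrices X_pos and X_neg, the GA parameters (pop_size, n_gen, mutation_scale), and the choice of n_neighbors=5 for the KNN classifier are all assumptions, and the paper's actual feature extraction and selection steps are not reproduced.

```python
# Minimal sketch: feature-level fusion of two EEG modalities, GA-based feature
# weighting, and KNN classification. All data here are synthetic placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical per-modality feature matrices (subjects x features), e.g. features
# extracted under positive and negative audio stimulation, with binary labels
# (1 = depressed, 0 = control).
n_subjects = 178
X_pos = rng.normal(size=(n_subjects, 20))
X_neg = rng.normal(size=(n_subjects, 20))
y = rng.integers(0, 2, size=n_subjects)

# Feature-level fusion: concatenate modality-specific features into a global vector.
X_fused = np.hstack([X_pos, X_neg])

def fitness(weights, X, y):
    """Cross-validated KNN accuracy on the weighted feature vector."""
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X * weights, y, cv=5).mean()

def ga_feature_weighting(X, y, pop_size=20, n_gen=15, mutation_scale=0.1):
    """Very small real-valued GA: truncation selection, uniform crossover,
    Gaussian mutation. Returns the best weight vector found."""
    n_features = X.shape[1]
    pop = rng.uniform(0.0, 1.0, size=(pop_size, n_features))
    for _ in range(n_gen):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]        # keep the best half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(n_features) < 0.5      # uniform crossover
            child = np.where(mask, a, b)
            child += rng.normal(scale=mutation_scale, size=n_features)
            children.append(np.clip(child, 0.0, 1.0))
        pop = np.vstack([parents] + children)
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[np.argmax(scores)]

best_w = ga_feature_weighting(X_fused, y)
print("weighted-fusion KNN accuracy: %.3f" % fitness(best_w, X_fused, y))
```

With random synthetic labels the reported accuracy hovers around chance; on real per-modality EEG features the weighted fusion step is where the modality combination described in the abstract would take effect.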