Computer science
Electroencephalography
Artificial intelligence
Merge (version control)
Facial expression
Emotion recognition
Facial recognition system
Feature extraction
Pattern recognition (psychology)
Perception
Speech recognition
Machine learning
Computer vision
Psychology
Psychiatry
Neuroscience
Information retrieval
Authors
Ying Tan,Zhe Sun,Feng Duan,Jordi Solé‐Casals,César F. Caiafa
Identifier
DOI:10.1016/j.bspc.2021.103029
Abstract
Human-robot interaction (HRI) systems play a critical role in society. However, most HRI systems today still suffer from disharmony, resulting in inefficient communication between the human and the robot. In this paper, a multimodal emotion recognition method is proposed to establish an HRI system with a low sense of disharmony. The method is based on facial expressions and electroencephalography (EEG). An image classification method for facial expressions and a suitable feature extraction method for EEG were investigated on public datasets. These methods were then applied to images and EEG data that we acquired ourselves. In addition, the Monte Carlo method was used to merge the results from the two modalities and to mitigate the problem of having a small dataset. The multimodal emotion recognition method was integrated into the HRI system, where it achieved a recognition rate of 83.33%. Furthermore, to evaluate the HRI system from the user's point of view, a perceptual assessment method was proposed in which participants scored the system based on their experience; it achieved an average score of 7 (scores ranged from 0 to 10). Experimental results demonstrate the effectiveness and feasibility of the multimodal emotion recognition method, which can help reduce the sense of disharmony in HRI systems.
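The abstract does not detail how the Monte Carlo merging of the facial-expression and EEG results is performed. The sketch below is a hypothetical illustration, not the authors' implementation: it assumes each modality outputs a per-class probability vector, repeatedly samples a class label from each distribution, and returns the class sampled most often as the fused prediction.

```python
import numpy as np

def monte_carlo_fusion(p_face, p_eeg, n_samples=10_000, seed=0):
    """Fuse two per-class probability vectors by repeated sampling.

    Hypothetical sketch: each trial draws one class label from the
    facial-expression distribution and one from the EEG distribution;
    the fused prediction is the class sampled most often overall.
    """
    rng = np.random.default_rng(seed)
    n_classes = len(p_face)
    draws_face = rng.choice(n_classes, size=n_samples, p=p_face)
    draws_eeg = rng.choice(n_classes, size=n_samples, p=p_eeg)
    counts = np.bincount(np.concatenate([draws_face, draws_eeg]),
                         minlength=n_classes)
    return int(np.argmax(counts))

# Example: both modalities lean (with different strength) toward class 1.
p_face = np.array([0.2, 0.7, 0.1])   # facial-expression classifier output
p_eeg = np.array([0.3, 0.4, 0.3])    # EEG classifier output
fused_label = monte_carlo_fusion(p_face, p_eeg)
```

With many samples this converges to the argmax of the averaged distributions, but the sampling view makes it easy to attach uncertainty estimates, which is one plausible reason to prefer it on a small dataset.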