Computer science
Emotion recognition
Schema therapy
Human–computer interaction
Artificial intelligence
Speech recognition
Psychology
Psychotherapist
Authors
Ya Li, Jianhua Tao, Björn Schuller, Shiguang Shan, Dongmei Jiang, Jia Jia
Source
Venue: Affective Computing and Intelligent Interaction (ACII Asia 2018)
Date: 2018-05-01
Cited by: 33
Identifier
DOI: 10.1109/aciiasia.2018.8470342
Abstract
This paper introduces the baselines for the Multimodal Emotion Recognition Challenge (MEC) 2017, part of the first Asian Conference on Affective Computing and Intelligent Interaction (ACII Asia) 2018. The aim of MEC 2017 is to improve the performance of emotion recognition under real-world conditions. The Chinese Natural Audio-Visual Emotion Database (CHEAVD) 2.0, an extension of the CHEAVD corpus released for MEC 2016, serves as the challenge database. MEC 2017 comprises three sub-challenges, and 31 teams participated in at least one of them: 27 teams in the audio-only, 16 in the video-only, and 17 in the multimodal emotion recognition sub-challenge. Baselines for the audio-only and video-only sub-challenges are produced by Support Vector Machines (SVMs) trained on audio features and video features separately. In the multimodal sub-challenge, both feature-level fusion and decision-level fusion are employed. The baselines for the audio-only, video-only, and multimodal sub-challenges are 39.2%, 21.7%, and 35.7% in macro average precision, respectively.
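For readers unfamiliar with the two fusion strategies named in the abstract, the following sketch illustrates them with scikit-learn SVMs on synthetic stand-in data. The feature dimensions, kernel, class count, and the data itself are illustrative assumptions, not the actual MEC 2017 baseline configuration.

# A minimal sketch of feature-level vs. decision-level fusion with SVMs,
# evaluated with the challenge metric (macro average precision).
# All dimensions and data below are assumptions for illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)
n_train, n_test, n_classes = 400, 100, 8  # class count assumed, not from the abstract

# Stand-ins for per-clip audio and video feature vectors.
Xa_tr = rng.normal(size=(n_train, 40))
Xv_tr = rng.normal(size=(n_train, 64))
y_tr = rng.integers(0, n_classes, size=n_train)
Xa_te = rng.normal(size=(n_test, 40))
Xv_te = rng.normal(size=(n_test, 64))
y_te = rng.integers(0, n_classes, size=n_test)

# Feature-level fusion: concatenate modality features, train one SVM.
clf_feat = SVC(kernel="linear").fit(np.hstack([Xa_tr, Xv_tr]), y_tr)
pred_feat = clf_feat.predict(np.hstack([Xa_te, Xv_te]))

# Decision-level fusion: one SVM per modality, average class posteriors.
clf_a = SVC(kernel="linear", probability=True).fit(Xa_tr, y_tr)
clf_v = SVC(kernel="linear", probability=True).fit(Xv_tr, y_tr)
proba = (clf_a.predict_proba(Xa_te) + clf_v.predict_proba(Xv_te)) / 2.0
pred_dec = clf_a.classes_[proba.argmax(axis=1)]  # both SVMs share the same classes_

# Challenge metric: macro average precision, the unweighted mean of
# per-class precision scores.
for name, pred in [("feature-level", pred_feat), ("decision-level", pred_dec)]:
    map_score = precision_score(y_te, pred, average="macro", zero_division=0)
    print(f"{name} fusion macro average precision: {map_score:.3f}")

Posterior averaging is only one simple decision-level combiner; the challenge baseline may weight the modalities differently.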