Computer science
Artificial intelligence
Signal processing
Feature extraction
Classifier
Wearable computer
System on a chip
Pattern recognition
Digital signal processing
Speech recognition
Embedded system
Computer hardware
Authors
Wei-Chih Li, Cheng-Jie Yang, Boting Liu, Wai-Chi Fang
Identifier
DOI:10.1109/embc46164.2021.9630979
Abstract
Recently, deep learning algorithms have been used widely in emotion recognition applications. However, it is difficult to detect human emotions in real time due to constraints imposed by computing power and convergence latency. This paper proposes a real-time affective computing platform that integrates an AI System-on-Chip (SoC) design and a multimodal signal processing system composed of electroencephalogram (EEG), electrocardiogram (ECG), and photoplethysmogram (PPG) signals. To extract emotional features, we used a short-time Fourier transform (STFT) for the EEG signal and direct extraction from the raw signals for the ECG and PPG signals. A long-term recurrent convolutional network (LRCN) classifier was implemented in the AI SoC design and divided emotions into three classes: happy, angry, and sad. The proposed LRCN classifier reached an average accuracy of 77.41% for cross-subject validation. The platform consists of wearable physiological sensors and multimodal signal processors integrated with the LRCN SoC design. The core area and total power consumption of the LRCN chip were 1.13 × 1.14 mm² and 48.24 mW, respectively. The on-chip training processing time and real-time classification processing time are 5.5 µs and 1.9 µs per sample. The proposed platform displays the emotion classification results on a graphical user interface (GUI) every second for real-time emotion monitoring.
Clinical relevance: The on-chip training processing time and real-time emotion classification processing time are 5.5 µs and 1.9 µs per sample with EEG, ECG, and PPG signals based on the LRCN model.
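The STFT feature-extraction step described in the abstract can be illustrated with a minimal sketch. The sampling rate, window length, and hop size below are illustrative assumptions, not the paper's actual parameters, and the synthetic sine input merely stands in for an EEG channel:

```python
# Hedged sketch of STFT magnitude-spectrogram feature extraction,
# as used for the EEG channel in the abstract. All parameters here
# (fs, win_len, hop) are illustrative assumptions.
import numpy as np

def stft_features(signal, win_len=128, hop=64):
    """Return a (n_frames, win_len // 2 + 1) magnitude spectrogram."""
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        frame = signal[start:start + win_len] * window
        # One-sided magnitude spectrum of the windowed frame
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.stack(frames)

# One second of a synthetic 10 Hz "EEG-like" sine wave at fs = 256 Hz
fs = 256
t = np.arange(fs) / fs
eeg = np.sin(2 * np.pi * 10 * t)
spec = stft_features(eeg)
print(spec.shape)  # (3, 65): 3 time frames, 65 frequency bins
```

With a 128-sample window at 256 Hz, each frequency bin spans 2 Hz, so the 10 Hz tone peaks in bin 5; in the full pipeline such spectrogram frames would feed the LRCN classifier's convolutional front end.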