Recently, deep learning algorithms have been widely used in emotion recognition applications. However, detecting human emotions in real time is difficult due to constraints on computing power and convergence latency. This paper proposes a real-time affective computing platform that integrates an AI System-on-Chip (SoC) design with a multimodal signal processing system for electroencephalogram (EEG), electrocardiogram (ECG), and photoplethysmogram (PPG) signals. To extract emotional features, we applied a short-time Fourier transform (STFT) to the EEG signal and used the raw ECG and PPG signals directly. A long-term recurrent convolutional network (LRCN) classifier was implemented in the AI SoC design to classify emotions into three classes: happy, angry, and sad. The proposed LRCN classifier reached an average accuracy of 77.41% under cross-subject validation. The platform consists of wearable physiological sensors and multimodal signal processors integrated with the LRCN SoC design. The core area of the LRCN chip is 1.13 × 1.14 mm², and its total power consumption is 48.24 mW. The on-chip training time and real-time classification time are 5.5 µs and 1.9 µs per sample, respectively. The platform displays the emotion classification results on a graphical user interface (GUI) every second for real-time emotion monitoring.

Clinical relevance— Based on the LRCN model, the on-chip training time and real-time emotion classification time are 5.5 µs and 1.9 µs per sample with EEG, ECG, and PPG signals.
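
The abstract names the pipeline (STFT features for EEG, raw ECG/PPG, an LRCN classifier with three output classes) but not its parameters, so the following is a minimal sketch of that structure, not the authors' implementation. The sampling rate, STFT window length, channel counts, and layer sizes below are illustrative assumptions; only the three-class output follows from the text.

    # Sketch of an STFT -> LRCN (CNN + LSTM) pipeline, assumed parameters only.
    import numpy as np
    import torch
    import torch.nn as nn
    from scipy.signal import stft

    FS_EEG = 256       # assumed EEG sampling rate (Hz); not given in the abstract
    N_CLASSES = 3      # happy, angry, sad (from the abstract)

    def eeg_features(eeg_window: np.ndarray) -> np.ndarray:
        """STFT magnitude spectrogram of a 1-D EEG window (per the abstract)."""
        _, _, zxx = stft(eeg_window, fs=FS_EEG, nperseg=128)
        return np.abs(zxx)  # shape: (freq_bins, time_frames)

    class LRCN(nn.Module):
        """Long-term recurrent convolutional network:
        per-frame CNN features fed to an LSTM, then a softmax classifier."""
        def __init__(self, n_freq_bins: int = 65, hidden: int = 64):
            super().__init__()
            # Treat frequency bins as input channels; convolve over time frames.
            self.cnn = nn.Sequential(
                nn.Conv1d(n_freq_bins, 32, kernel_size=3, padding=1),
                nn.ReLU(),
            )
            self.lstm = nn.LSTM(input_size=32, hidden_size=hidden,
                                batch_first=True)
            self.fc = nn.Linear(hidden, N_CLASSES)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, freq_bins, time_frames)
            h = self.cnn(x)                # (batch, 32, time_frames)
            h = h.transpose(1, 2)          # (batch, time_frames, 32)
            out, _ = self.lstm(h)
            return self.fc(out[:, -1, :])  # class logits from the last step

    # Usage: one second of single-channel EEG -> spectrogram -> class logits.
    window = np.random.randn(FS_EEG)                 # placeholder signal
    spec = eeg_features(window)                      # (freq_bins, time_frames)
    x = torch.from_numpy(spec).float().unsqueeze(0)  # add batch dimension
    logits = LRCN(n_freq_bins=spec.shape[0])(x)
    print(logits.shape)                              # torch.Size([1, 3])

In the platform itself this classification would run on the LRCN SoC rather than in software; raw ECG and PPG windows would enter as additional input streams, a fusion step the abstract mentions but does not detail.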