Computer science
Facial expression
Wearable computer
Photoplethysmogram
Wearable technology
Affective computing
Signal (programming language)
Artificial intelligence
Smartwatch
Microphone
Human–computer interaction
Mobile device
Speech recognition
Computer vision
Embedded system
Operating system
Filter (signal processing)
Telecommunications
Programming language
Sound pressure
Authors
Kangning Yang, Chaofan Wang, Yue Gu, Zhanna Sarsenbayeva, Benjamin Tag, Tilman Dingler, Greg Wadley, Jorge Gonçalves
Source
Journal: IEEE Transactions on Affective Computing
[Institute of Electrical and Electronics Engineers]
Date: 2023-04-01
Volume/Issue: 14 (2): 1082-1097
Citations: 31
Identifier
DOI: 10.1109/taffc.2021.3100868
Abstract
With the rapid development of mobile and wearable devices, it is increasingly possible to access users’ affective data in an unobtrusive manner. On this basis, researchers have proposed various systems to recognize users’ emotional states. However, most of these studies rely on traditional machine learning techniques and a limited number of signals, leading to systems that either do not generalize well or frequently lack sufficient information for emotion detection in realistic scenarios. In this paper, we propose a novel attention-based LSTM system that uses a combination of sensors from a smartphone (front camera, microphone, touch panel) and a wristband (photoplethysmography, electrodermal activity, and infrared thermopile sensor) to accurately determine users’ emotional states. We evaluated the proposed system by conducting a user study with 45 participants. Using collected behavioral (facial expression, speech, keystroke) and physiological (blood volume, electrodermal activity, skin temperature) affective responses induced by visual stimuli, our system was able to achieve an average accuracy of 89.2 percent for binary positive and negative emotion classification under leave-one-participant-out cross-validation. Furthermore, we investigated the effectiveness of different combinations of data signals to cover different scenarios of signal availability.
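The abstract names an attention-based LSTM as the classifier. The sketch below illustrates only the attention-pooling step that would sit on top of the LSTM hidden states: all names, shapes, weights, and data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Numerically stable softmax over the last axis."""
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention_pool(H, W, v):
    """Additive attention over a sequence of encoder hidden states.

    H : (T, d) hidden states, standing in for LSTM outputs per time step
    W : (d, d) projection matrix, v : (d,) scoring vector (both hypothetical)
    Returns attention weights (T,) and the weighted context vector (d,).
    """
    scores = np.tanh(H @ W) @ v   # (T,) unnormalized relevance per step
    alpha = softmax(scores)       # (T,) attention weights, sum to 1
    context = alpha @ H           # (d,) weighted sum of hidden states
    return alpha, context

# Toy example: T = 8 time steps of d = 16-dimensional hidden states,
# e.g. one window of fused multimodal features after the recurrent encoder.
T, d = 8, 16
H = rng.standard_normal((T, d))
W = rng.standard_normal((d, d)) * 0.1
v = rng.standard_normal(d) * 0.1

alpha, context = attention_pool(H, W, v)

# A linear head on the pooled context yields the binary emotion probability.
w_out, b_out = rng.standard_normal(d) * 0.1, 0.0
p_positive = 1.0 / (1.0 + np.exp(-(context @ w_out + b_out)))
```

The attention weights let the classifier emphasize time steps (and, by extension, moments in the signal window) that are most informative for the positive/negative decision.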
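The evaluation protocol named in the abstract, leave-one-participant-out cross-validation, holds out all samples of one participant per fold so the model is always tested on an unseen person. A minimal sketch of the fold construction (participant IDs here are made up for illustration):

```python
import numpy as np

def lopo_splits(participant_ids):
    """Yield (participant, train_idx, test_idx) for leave-one-participant-out
    cross-validation: each fold's test set is exactly one participant's data."""
    ids = np.asarray(participant_ids)
    for p in np.unique(ids):
        test = np.flatnonzero(ids == p)    # every sample of this participant
        train = np.flatnonzero(ids != p)   # everyone else's samples
        yield p, train, test

# Toy example: 3 participants with 2 samples each (the study used 45).
ids = ["p1", "p1", "p2", "p2", "p3", "p3"]
folds = list(lopo_splits(ids))
```

Per-fold accuracies are then averaged, which is how a figure like the reported 89.2 percent would be obtained.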