Computer science
Pipeline (software)
Noise (video)
Radar
Facial expression
Wearable computer
Computer vision
Artificial intelligence
Telecommunications
Image (mathematics)
Embedded system
Programming language
Authors
Zhang Xi,Yu Zhang,Zhenguo Shi,Tao Gu
Identifier
DOI:10.1145/3570361.3592515
Abstract
Facial expression recognition plays a vital role in enabling emotional awareness in multimedia Internet of Things applications. Traditional camera- or wearable-sensor-based approaches may compromise user privacy or cause discomfort. Recent device-free approaches open a promising direction by exploiting Wi-Fi or ultrasound signals reflected from facial muscle movements, but limitations remain, such as poor performance in the presence of body motions and the inability to detect multiple targets. To bridge the gap, we propose mmFER, a novel millimeter-wave (mmWave) radar based system that extracts facial muscle movements from mmWave signals to recognize facial expressions. We propose a novel dual-locating approach based on MIMO that exploits spatial information in raw mmWave signals to localize the face in space, eliminating ambient noise. In addition, collecting mmWave training data can be very costly in practice, and an insufficient training dataset may lead to low accuracy. To overcome this, we design a cross-domain transfer pipeline that enables effective and safe model knowledge transfer from image to mmWave. Extensive evaluations demonstrate that mmFER achieves an accuracy of 80.57% on average within a detection range of 0.3 m to 2.5 m, and that it is robust across various real-world settings.
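The dual-locating approach relies on the spatial information a MIMO radar array provides. The abstract does not give the paper's exact algorithm, but the standard building block for locating a reflector in space is angle-of-arrival (AoA) estimation from the phase difference across adjacent receive antennas. The sketch below illustrates that building block only; the function name and the half-wavelength antenna spacing are assumptions for illustration, not details from mmFER:

```python
import math

def aoa_from_phase(delta_phi, spacing_wavelengths=0.5):
    """Estimate the angle of arrival (radians) from the phase difference
    between two adjacent receive antennas.

    For antenna spacing d (in wavelengths), a plane wave arriving at
    angle theta produces a phase difference
        delta_phi = 2 * pi * d * sin(theta),
    so theta = arcsin(delta_phi / (2 * pi * d)).
    """
    s = delta_phi / (2 * math.pi * spacing_wavelengths)
    if abs(s) > 1:
        # Beyond the unambiguous range for this antenna spacing.
        raise ValueError("phase difference outside the unambiguous range")
    return math.asin(s)

# A broadside target (theta = 0) produces no phase difference; with
# half-wavelength spacing, delta_phi = pi/2 corresponds to 30 degrees.
print(math.degrees(aoa_from_phase(0.0)))                    # 0.0
print(round(math.degrees(aoa_from_phase(math.pi / 2)), 1))  # 30.0
```

Combining such angle estimates with the range measured from signal round-trip time is what lets a mmWave system isolate reflections from the face region and suppress ambient clutter.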