Nagashri N. Lakshminarayana, Nishant Sankaran, Srirangaraj Setlur, Venu Govindaraju
Identifier
DOI:10.1109/fg.2019.8756629
Abstract
In this paper, we present a feature aggregation method that combines information from the visible-light domain with physiological signals to predict the 12 facial action units in the MMSE dataset. Although multimodal affect analysis has gained considerable attention, the utility of physiological signals for recognizing facial action units remains relatively unexplored. We investigate whether physiological signals such as Electrodermal Activity (EDA), respiration rate, and pulse rate can be used as metadata for action unit recognition. We exploit the effectiveness of deep learning methods to learn an optimal combined representation derived from the individual modalities. We obtain improved performance on the MMSE dataset, further validating our claim. To the best of our knowledge, this is the first study of facial action unit recognition using physiological signals.
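The fusion idea in the abstract, combining visual features with physiological measurements into one joint representation that is then scored per action unit, can be illustrated with a minimal sketch. All function names, feature dimensions, and values below are illustrative assumptions for exposition; they are not the authors' actual architecture or data.

```python
# Hypothetical sketch of feature aggregation across modalities.
# Visual features (e.g. from a CNN over the visible-light frame) and
# physiological measurements (EDA, respiration rate, pulse rate) are
# concatenated into a single joint vector; a toy linear scorer stands
# in for the learned deep network that predicts each action unit.

def aggregate_features(visual_feats, physio_feats):
    """Concatenate modality-specific feature vectors into one representation."""
    return visual_feats + physio_feats  # simple list concatenation

def score_action_unit(joint_feats, weights, bias=0.0):
    """Toy linear scorer standing in for the learned fusion network."""
    return sum(f * w for f, w in zip(joint_feats, weights)) + bias

# Illustrative inputs (assumed values, not from the MMSE dataset):
visual = [0.2, 0.7, 0.1]      # visual-domain features
physio = [0.35, 16.0, 72.0]   # EDA, respiration rate, pulse rate

joint = aggregate_features(visual, physio)
score = score_action_unit(joint, weights=[0.1] * len(joint))
```

In practice the combined representation would be learned end to end rather than formed by plain concatenation with fixed weights; the sketch only shows where the two modalities meet.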