Hypercomplex number
Computer science
Artificial intelligence
Valence (chemistry)
Feature (linguistics)
Arousal
Artificial neural network
Machine learning
Pattern recognition (psychology)
Psychology
Philosophy
Physics
Quantum mechanics
Linguistics
Neuroscience
Quaternion
Mathematics
Geometry
Authors
Eleonora Lopez, Eleonora Chiarantano, Eleonora Grassucci, Danilo Comminiello
Identifier
DOI:10.1109/icasspw59220.2023.10193329
Abstract
Multimodal emotion recognition from physiological signals is receiving increasing attention because, unlike behavioral reactions, such signals cannot be controlled at will and therefore provide more reliable information. Existing deep learning-based methods still rely on handcrafted features, not taking full advantage of the learning ability of neural networks, and often adopt a single-modality approach, even though human emotions are inherently expressed in a multimodal way. In this paper, we propose a hypercomplex multimodal network equipped with a novel fusion module comprising parameterized hypercomplex multiplications. Indeed, by operating in a hypercomplex domain, the operations follow algebraic rules that allow modeling latent relations among learned feature dimensions for a more effective fusion step. We perform classification of valence and arousal from electroencephalogram (EEG) and peripheral physiological signals on the publicly available MAHNOB-HCI database, surpassing a multimodal state-of-the-art network. The code of our work is freely available at https://github.com/ispamm/MHyEEG.
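The fusion operation referenced in the abstract, the parameterized hypercomplex multiplication (PHM), builds a layer's weight matrix as a sum of Kronecker products so that the algebra rules (for example, quaternion-like products when n = 4) are learned jointly with the filters. Below is a minimal PyTorch sketch of such a layer applied to concatenated EEG and peripheral-signal embeddings. The class name `PHMLinear`, the feature dimensions, and the toy fusion setup are illustrative assumptions and do not reproduce the released MHyEEG implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PHMLinear(nn.Module):
    """Parameterized hypercomplex multiplication (PHM) linear layer (sketch).

    The weight is W = sum_i A_i kron S_i, where the n x n matrices A_i
    encode the (learned) hypercomplex algebra rules and the S_i hold the
    actual filters, sharing parameters across feature dimensions.
    """

    def __init__(self, n: int, in_features: int, out_features: int):
        super().__init__()
        assert in_features % n == 0 and out_features % n == 0
        self.n = n
        # Algebra matrices: n matrices of shape (n, n), learned.
        self.A = nn.Parameter(torch.randn(n, n, n) * 0.1)
        # Filter matrices: n matrices of shape (out/n, in/n), learned.
        self.S = nn.Parameter(
            torch.randn(n, out_features // n, in_features // n) * 0.1
        )
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assemble the full (out_features, in_features) weight matrix
        # as a sum of Kronecker products, then apply it as a linear map.
        W = sum(torch.kron(self.A[i], self.S[i]) for i in range(self.n))
        return F.linear(x, W, self.bias)


if __name__ == "__main__":
    # Hypothetical fusion of per-modality embeddings (batch of 8).
    eeg_emb = torch.randn(8, 64)   # EEG branch output (assumed size)
    per_emb = torch.randn(8, 64)   # peripheral-signal branch output (assumed size)
    fused_in = torch.cat([eeg_emb, per_emb], dim=-1)        # (8, 128)
    fusion = PHMLinear(n=4, in_features=128, out_features=64)
    out = fusion(fused_in)                                   # (8, 64)
```

In this setup, choosing n = 4 mirrors a quaternion-like parameterization: the layer uses roughly 1/n of the parameters of an ordinary linear layer of the same size while still mixing the two modalities through the learned algebra matrices.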