Computer science
Gait
Artificial intelligence
Decomposition
Representation (politics)
Fusion
Pattern recognition (psychology)
Computer vision
Speech recognition
Physical medicine and rehabilitation
Political science
Philosophy
Politics
Biology
Medicine
Linguistics
Law
Ecology
Authors
Jianbo Xiong, Shinan Zou, Jin Tang
Identifier
DOI:10.1007/978-3-031-53311-2_28
Abstract
Multimodal gait recognition aims to exploit multiple gait modalities for identity recognition. Previous methods have focused on designing complex fusion techniques; however, the heterogeneity between modalities, namely distributional differences and information redundancy, negatively impacts recognition. Motivated by this, we propose a novel decomposition-fusion gait network (DFGait) that combines silhouette and skeleton data. The network learns modality-shared and modality-specific feature representations for both modalities and introduces an inter-modality regularization loss and an intra-modality regularization loss to encourage the preservation of common and unique information, reducing the modality gap and information redundancy. Because these representations are embedded in their own spaces during learning, fusing them is challenging. We therefore propose an adversarial modality alignment learning strategy that aligns the features of the two modalities by confusing a modality discriminator, maximizing cross-modal information interaction. Finally, a separable fusion module fuses the features of the two modalities into a comprehensive gait representation. Experimental results demonstrate that DFGait achieves state-of-the-art performance on popular gait datasets, with rank-1 accuracy of 50.30% on Gait3D and 61.42% on GREW. The source code is available at https://github.com/BoyeXiong/DFGait .
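The abstract's core ideas (shared/specific decomposition, an inter-modality loss that preserves common information, an intra-modality loss that reduces redundancy, and a separable fusion of the decomposed parts) can be illustrated with a minimal NumPy sketch. All function names and the concrete loss formulas below are assumptions for illustration only; the paper's actual definitions (and its adversarial discriminator, omitted here) may differ.

```python
import numpy as np

def inter_modality_loss(shared_sil, shared_skel):
    """Pull the modality-SHARED silhouette and skeleton features
    together, encouraging common information and a smaller modality
    gap. Sketch: mean squared distance between the two embeddings
    (an assumed stand-in for the paper's loss)."""
    return float(np.mean((shared_sil - shared_skel) ** 2))

def intra_modality_loss(shared, specific):
    """Push one modality's SHARED and SPECIFIC features apart to
    reduce information redundancy. Sketch: squared cosine similarity,
    which is 0 when the two parts are orthogonal."""
    num = np.sum(shared * specific, axis=1)
    den = (np.linalg.norm(shared, axis=1)
           * np.linalg.norm(specific, axis=1) + 1e-8)
    return float(np.mean((num / den) ** 2))

def separable_fusion(shared_sil, specific_sil, shared_skel, specific_skel):
    """Fuse the decomposed parts into one gait representation.
    Sketch: average the (aligned) shared parts and concatenate the
    modality-specific parts, so each source of information stays
    separable in the final embedding."""
    shared = 0.5 * (shared_sil + shared_skel)
    return np.concatenate([shared, specific_sil, specific_skel], axis=1)

# Toy usage: a batch of 4 samples with 8-dimensional features
# per decomposed part (dimensions are arbitrary here).
rng = np.random.default_rng(0)
sh_sil, sp_sil = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
sh_skel, sp_skel = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
fused = separable_fusion(sh_sil, sp_sil, sh_skel, sp_skel)  # shape (4, 24)
```

In a full model these losses would be added to the recognition objective and minimized jointly, with the adversarial alignment strategy handling the residual distribution shift between the two shared branches.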