Keywords: computer science, discriminant, encoder, channel (broadcasting), artificial intelligence, key (lock), feature (linguistics), joint (building), pattern recognition (psychology), feature extraction, operating system, linguistics, computer security, engineering, philosophy, computer network, architectural engineering
Authors
Man Yang,Lipeng Gan,Runze Cao,Xiaochao Li
Identifier
DOI: 10.1109/jsen.2023.3303912
Abstract
Action recognition provides an application for human action classification utilizing datasets captured by various sensor cameras. However, capturing the key semantic features and subtle differences needed for fine-grained action recognition from redundant motion sequences remains a challenging task. To address this issue, we propose a novel bidirectional encoder representations from transformers (BERT)-based joint channel–temporal module that explores channel interaction correlation through a channel–temporal embedded module and a self-attention mechanism. The channel view branch is developed to capture key channel semantic features through the interactive correlation between subchannel feature sequences across frames. Our studies reveal that channel interaction is crucial for discovering the discriminative features that separate fine-grained action recognition categories. Furthermore, the channel view branch can work collaboratively with the temporal view branch to take full advantage of channel interaction and channel–temporal dependencies through joint learning via a weight-sharing strategy. The proposed BERT-based joint channel–temporal module works in a plug-and-play way and can be integrated with 2-D backbones, such as the temporal shift module (TSM), multiview fusion network (MVFNet), MotionSqueeze network (MSNet), and temporal difference network (TDN). Extensive experiments are carried out on the HMDB51, MiniKinetics, fine-grained Something-Something V1 & V2, and multimodal N-UCLA datasets, and the results demonstrate the effectiveness of our joint channel–temporal module. Our method achieves 83.8%, 83.6%, 57.1%, and 68.2% top-1 accuracy on these single-modal datasets, respectively. The multimodal experiments on the N-UCLA dataset achieve 98.7% and 98.9% accuracy with RGB + skeleton and RGB + depth fusions.
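To make the two-branch idea in the abstract concrete, the following is a minimal NumPy sketch (not the authors' implementation) of how a channel view and a temporal view could share self-attention weights: per-frame backbone features are tokenized two ways — once as subchannel groups gathered across frames (channel view) and once as whole frames (temporal view) — each view gets its own embedding into a common dimension, and both reuse the same query/key/value projections. All sizes (`T`, `C`, `G`, `D`) and the embedding/projection matrices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, C, G, D = 8, 64, 4, 32  # frames, channels, subchannel groups, shared dim (assumed)

x = rng.normal(size=(T, C))  # per-frame features from a 2-D backbone (stand-in data)

# Channel view: each token is one subchannel group's features across all frames.
chan_tokens = x.reshape(T, G, C // G).transpose(1, 0, 2).reshape(G, T * (C // G))
# Temporal view: each token is one frame's full channel feature vector.
temp_tokens = x  # shape (T, C)

# View-specific embeddings map both token sets into the shared dimension D.
E_chan = rng.normal(size=(T * (C // G), D)) * 0.1
E_temp = rng.normal(size=(C, D)) * 0.1

# Weight-tied attention projections used by BOTH branches (the sharing strategy).
Wq, Wk, Wv = (rng.normal(size=(D, D)) * 0.1 for _ in range(3))

def self_attention(tok, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a token sequence."""
    q, k, v = tok @ Wq, tok @ Wk, tok @ Wv
    scores = q @ k.T / np.sqrt(D)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)  # rows sum to 1
    return attn @ v

chan_out = self_attention(chan_tokens @ E_chan, Wq, Wk, Wv)  # (G, D)
temp_out = self_attention(temp_tokens @ E_temp, Wq, Wk, Wv)  # (T, D)
print(chan_out.shape, temp_out.shape)
```

The channel branch attends across subchannel groups (channel interaction), while the temporal branch attends across frames (temporal dependency); tying `Wq`, `Wk`, `Wv` is one plausible reading of the paper's weight-sharing between the two views.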