Keywords
Computer science; Pairwise comparison; Artificial intelligence; Pattern recognition (psychology); Leverage (statistics); Machine learning; Test data; Data mining; Programming language
Authors
Zan Gao,Leming Guo,Tongwei Ren,An-An Liu,Zhiyong Cheng,Shengyong Chen
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems (Institute of Electrical and Electronics Engineers)
Date: 2022-03-01
Volume/Issue: 33(3): 1147-1161
Citations: 18
Identifier
DOI: 10.1109/tnnls.2020.3041018
Abstract
In this work, we target cross-domain action recognition (CDAR) in the video domain and propose a novel end-to-end pairwise two-stream ConvNets (PTC) algorithm for real-life conditions in which only a few labeled samples are available. To cope with the limited-training-sample problem, we employ a pairwise network architecture that can leverage training samples from a source domain and thus requires only a few labeled samples per category from the target domain. In particular, a frame self-attention mechanism and an adaptive weight scheme are embedded into the PTC network to adaptively combine the RGB and flow features; this design effectively learns domain-invariant features for both the source and target domains. In addition, we propose a sphere-boundary sample-selecting scheme that selects the training samples at the boundary of a class (in the feature space) to train the PTC model, which substantially enhances the model's generalization capability. To validate the effectiveness of our PTC model, we construct two CDAR data sets (SDAI Action I and SDAI Action II) that include indoor and outdoor environments; all actions and samples in these data sets were carefully collected from public action data sets. To the best of our knowledge, these are the first data sets specifically designed for the CDAR task. Extensive experiments on these two data sets show that PTC outperforms state-of-the-art video action recognition methods in both accuracy and training efficiency. Notably, when only two labeled training samples per category are used on the SDAI Action I data set, PTC achieves accuracy improvements of 21.9% and 6.8% over the two-stream and temporal segment network models, respectively. As an added contribution, the SDAI Action I and SDAI Action II data sets will be released to facilitate future research on the CDAR task.
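The abstract describes the fusion mechanism only at a high level. As an illustration, below is a minimal PyTorch sketch of frame self-attention pooling combined with an adaptive weighting of the RGB and flow streams; the class name, layer shapes, and the sigmoid-gated fusion rule are assumptions made for illustration, not the authors' published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameSelfAttentionFusion(nn.Module):
    """Hypothetical sketch of frame self-attention pooling plus adaptive
    two-stream (RGB/flow) fusion, loosely following the abstract.
    All parameterizations here are assumptions."""

    def __init__(self, feat_dim: int):
        super().__init__()
        # Scores each frame's feature vector; a softmax over frames
        # turns the scores into attention weights (assumed design).
        self.frame_score = nn.Linear(feat_dim, 1)
        # Scalar logit for the adaptive RGB-vs-flow mixing weight.
        self.stream_logit = nn.Parameter(torch.zeros(1))

    def attend(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, feat_dim)
        attn = F.softmax(self.frame_score(frames), dim=1)  # (B, T, 1)
        return (attn * frames).sum(dim=1)                  # (B, D)

    def forward(self, rgb_frames: torch.Tensor,
                flow_frames: torch.Tensor) -> torch.Tensor:
        rgb = self.attend(rgb_frames)    # attention-pooled RGB feature
        flow = self.attend(flow_frames)  # attention-pooled flow feature
        w = torch.sigmoid(self.stream_logit)  # adaptive stream weight
        return w * rgb + (1.0 - w) * flow     # fused video feature
```

In this sketch, a learned per-frame score yields attention weights over frames, and a single learned logit gates the mixture of the pooled RGB and flow features; the paper's actual adaptive weight scheme may instead condition the weights on the input rather than using a global parameter.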