Pairwise Two-Stream ConvNets for Cross-Domain Action Recognition With Small Data

Keywords (auto-tagged): computer science · pairwise comparison · artificial intelligence · pattern recognition (psychology) · leverage (statistics) · machine learning · test data · data mining · programming languages
Authors
Zan Gao, Leming Guo, Tongwei Ren, An-An Liu, Zhiyong Cheng, Shengyong Chen
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems [Institute of Electrical and Electronics Engineers]
Volume/Issue: 33 (3): 1147-1161 · Cited by: 18
Identifier
DOI: 10.1109/TNNLS.2020.3041018
Abstract

In this work, we target cross-domain action recognition (CDAR) in the video domain and propose a novel end-to-end pairwise two-stream ConvNets (PTC) algorithm for real-life conditions, in which only a few labeled samples are available. To cope with the limited training sample problem, we employ a pairwise network architecture that can leverage training samples from a source domain and, thus, requires only a few labeled samples per category from the target domain. In particular, a frame self-attention mechanism and an adaptive weight scheme are embedded into the PTC network to adaptively combine the RGB and flow features. This design can effectively learn domain-invariant features for both the source and target domains. In addition, we propose a sphere boundary sample-selecting scheme that selects the training samples at the boundary of a class (in the feature space) to train the PTC model. In this way, a well-enhanced generalization capability can be achieved. To validate the effectiveness of our PTC model, we construct two CDAR data sets (SDAI Action I and SDAI Action II) that include indoor and outdoor environments; all actions and samples in these data sets were carefully collected from public action data sets. To the best of our knowledge, these are the first data sets specifically designed for the CDAR task. Extensive experiments were conducted on these two data sets. The results show that PTC outperforms state-of-the-art video action recognition methods in terms of both accuracy and training efficiency. It is noteworthy that when only two labeled training samples per category are used in the SDAI Action I data set, PTC achieves 21.9% and 6.8% improvements in accuracy over the two-stream and temporal segment network models, respectively. As an added contribution, the SDAI Action I and SDAI Action II data sets will be released to facilitate future research on the CDAR task.
