Computer science
Artificial intelligence
Pose
Feature (linguistics)
Context (archaeology)
RGB color model
Action (physics)
Computer vision
Action recognition
Ambiguity
Articulated human pose estimation
Sequence (biology)
Pattern recognition (psychology)
3D pose estimation
Authors
Gyeongsik Moon, Heeseung Kwon, Kyoung Mu Lee, Minsu Cho
Identifier
DOI: 10.1109/cvprw53098.2021.00372
Abstract
Most current action recognition methods rely heavily on appearance information, taking an RGB sequence of the entire image region as input. While effective at exploiting contextual information around humans, e.g., human appearance and scene category, they are easily fooled by out-of-context action videos, where the context does not match the target action. In contrast, pose-based methods, which take only a sequence of human skeletons as input, suffer from inaccurate pose estimation and the inherent ambiguity of human pose. Integrating these two approaches has turned out to be non-trivial: training a model on both appearance and pose ends up with a strong bias towards appearance and does not generalize well to unseen videos. To address this problem, we propose to learn pose-driven feature integration that dynamically combines the appearance and pose streams by observing pose features on the fly. The main idea is to let the pose stream decide how much, and which, appearance information is used in the integration, based on whether the given pose information is reliable. We show that the proposed IntegralAction achieves highly robust performance across in-context and out-of-context action video datasets. The code is available here.
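The pose-driven integration described in the abstract amounts to a gating mechanism: the pose stream alone produces weights that decide how much of the appearance features enters the fused representation. Below is a minimal PyTorch sketch of such a gate; the module name PoseDrivenGate, the feature dimensions, and the sigmoid gating are illustrative assumptions, not the authors' exact IntegralAction architecture.

```python
import torch
import torch.nn as nn

class PoseDrivenGate(nn.Module):
    """Hypothetical sketch of pose-driven feature integration.

    The gate is computed from the pose feature only, so the pose
    stream "decides" how much appearance information is mixed into
    the fused representation. All dimensions are assumptions.
    """

    def __init__(self, pose_dim=256, app_dim=512, out_dim=512):
        super().__init__()
        self.pose_proj = nn.Linear(pose_dim, out_dim)
        self.app_proj = nn.Linear(app_dim, out_dim)
        # Per-channel gate in [0, 1], driven by pose features alone.
        self.gate = nn.Sequential(
            nn.Linear(pose_dim, out_dim),
            nn.Sigmoid(),
        )

    def forward(self, pose_feat, app_feat):
        g = self.gate(pose_feat)
        # Appearance enters the fusion only to the extent the gate allows.
        fused = self.pose_proj(pose_feat) + g * self.app_proj(app_feat)
        return fused

# Usage with dummy clip-level features (batch of 8; shapes are assumptions).
pose_feat = torch.randn(8, 256)
app_feat = torch.randn(8, 512)
fused = PoseDrivenGate()(pose_feat, app_feat)
print(fused.shape)  # torch.Size([8, 512])
```

Computing the gate from the pose features alone, rather than from a concatenation of both streams, is one way to keep the decision with the pose stream as the abstract describes: when pose is unreliable the gate can open to admit appearance information, and when pose is informative it can suppress misleading context.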