Computer science
Artificial intelligence
Action recognition
Computer vision
Noise (video)
Invariant (physics)
Task (project management)
Cognitive neuroscience of visual object recognition
Action (physics)
Pattern recognition (psychology)
Variance (accounting)
Motion (physics)
Object (grammar)
Class (philosophy)
Image (mathematics)
Mathematics
Management
Economics
Business
Accounting
Physics
Quantum mechanics
Mathematical physics
Authors
Jiang Wang,Zicheng Liu,Ying Wu,Junsong Yuan
Identifier
DOI:10.1109/cvpr.2012.6247813
Abstract
Human action recognition is an important yet challenging task. The recently developed commodity depth sensors open up new possibilities of dealing with this problem but also present some unique challenges. The depth maps captured by the depth cameras are very noisy, and the 3D positions of the tracked joints may be completely wrong if serious occlusions occur, which increases the intra-class variations in the actions. In this paper, an actionlet ensemble model is learnt to represent each action and to capture the intra-class variance. In addition, novel features that are suitable for depth data are proposed. They are robust to noise, invariant to translational and temporal misalignments, and capable of characterizing both the human motion and the human-object interactions. The proposed approach is evaluated on two challenging action recognition datasets captured by commodity depth cameras, and another dataset captured by a MoCap system. The experimental evaluations show that the proposed approach achieves superior performance to the state-of-the-art algorithms.
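The abstract claims features that are invariant to translational misalignment of the tracked skeleton. One common way to obtain such invariance from depth-sensor joint tracks is to use pairwise relative joint positions rather than absolute coordinates; the sketch below is an illustrative example of that idea (the function name and the 20-joint layout are assumptions, not the paper's exact feature definition):

```python
import numpy as np

def relative_joint_features(joints):
    """Stack pairwise relative 3D joint positions into one vector.

    A global translation of all joints (e.g. a camera or subject shift)
    cancels in every pairwise difference, so the feature is
    translation-invariant. `joints` has shape (n_joints, 3).
    """
    n = joints.shape[0]
    feats = []
    for i in range(n):
        for j in range(i + 1, n):
            feats.append(joints[i] - joints[j])
    return np.concatenate(feats)

# Check the invariance: the same pose shifted in space yields
# identical features.
rng = np.random.default_rng(0)
pose = rng.normal(size=(20, 3))               # hypothetical 20-joint skeleton
shifted = pose + np.array([1.0, -2.0, 0.5])   # same pose, translated
print(np.allclose(relative_joint_features(pose),
                  relative_joint_features(shifted)))  # True
```

For 20 joints this produces 190 pairwise differences (570 values per frame); temporal alignment of such per-frame features across a sequence is a separate problem that the paper's abstract also mentions but this sketch does not address.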