Authors
Yen-Ting Lai, Cheng-Hung Lin, Po-Yung Chou
Identifier
DOI: 10.1109/icce59016.2024.10444448
Abstract
Point cloud action recognition has the advantage of being less affected by changes in lighting and viewing angle, because it relies on the three-dimensional positions of objects rather than pixel values. This enables robust recognition even in complex and dark environments. Point cloud action recognition also has widespread applications in fields such as robotics, virtual reality, autonomous driving, human-computer interaction, and game development. For instance, understanding human actions is crucial for better interaction and collaboration in robotics, while in virtual reality it can capture and reproduce user movements to enhance realism and interactivity. To build a smoothly operating point cloud action recognition system, it is usually necessary to filter out background and irrelevant points so that the data are clean and aligned. In previous work, point cloud filtering and action recognition were typically performed separately: few systems integrate the two, and some perform action recognition without any background filtering. In this paper, we propose a pipeline that lets users acquire point cloud data directly from the Azure Kinect DK and perform fully automated preprocessing, producing cleaner point cloud data without background points that is suitable for action recognition. Our approach uses PSTNet for point cloud action recognition and trains the model on a dataset, covering 12 action classes, obtained through this automated preprocessing. Finally, we develop a real-time point cloud action recognition system that integrates automated point cloud preprocessing.
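The abstract does not spell out the preprocessing steps beyond removing background points, so the following is only a minimal sketch of what such automated filtering might look like, assuming Open3D and a raw (N, 3) point array already captured from the Azure Kinect DK. The depth cutoff, voxel size, and outlier parameters are illustrative assumptions, not the paper's settings.

```python
# Hypothetical background-filtering sketch (not the paper's exact pipeline).
import numpy as np
import open3d as o3d

def preprocess_point_cloud(points, max_depth=3.0, voxel_size=0.02):
    """Filter background and noise from a raw (N, 3) point array."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)

    # Assumed depth cutoff: treat points farther than max_depth metres
    # along the camera axis as background and drop them.
    pts = np.asarray(pcd.points)
    pcd = pcd.select_by_index(np.where(pts[:, 2] < max_depth)[0].tolist())

    # Downsample to a uniform density so successive frames are comparable.
    pcd = pcd.voxel_down_sample(voxel_size)

    # Remove sparse stray points left around the subject.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return np.asarray(pcd.points)
```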
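For the recognition stage, a real-time system of the kind described would feed a sliding window of preprocessed frames to the classifier. The sketch below assumes a PyTorch PSTNet implementation; the `pstnet` import path, the `PSTNet` constructor signature, and the clip length and point count are placeholders, with only the 12 action classes taken from the paper.

```python
# Hedged sketch of real-time inference with a PSTNet-style classifier.
from collections import deque

import torch
from pstnet import PSTNet  # assumed import path for a PSTNet implementation

NUM_CLASSES = 12   # the paper trains on 12 action classes
CLIP_LEN = 16      # assumed number of frames per input clip
NUM_POINTS = 1024  # assumed points sampled per frame

model = PSTNet(num_classes=NUM_CLASSES)  # constructor args are assumptions
model.eval()

frames = deque(maxlen=CLIP_LEN)  # sliding window of preprocessed frames

def recognize(frame_points):
    """frame_points: (NUM_POINTS, 3) preprocessed cloud for one frame.

    Returns the predicted class index once the window is full, else None.
    """
    frames.append(torch.as_tensor(frame_points, dtype=torch.float32))
    if len(frames) < CLIP_LEN:
        return None  # not enough frames buffered yet
    clip = torch.stack(list(frames)).unsqueeze(0)  # shape (1, T, N, 3)
    with torch.no_grad():
        logits = model(clip)
    return int(logits.argmax(dim=1))
```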