Point cloud action recognition has the advantage of being less affected by changes in lighting and viewing angle, because it relies on the three-dimensional positions of objects rather than pixel values. This enables robust recognition even in cluttered or dark environments. Point cloud action recognition also has widespread applications in fields such as robotics, virtual reality, autonomous driving, human-computer interaction, and game development. For instance, understanding human actions is crucial for interaction and collaboration in robotics, while in virtual reality it can capture and reproduce user movements to enhance realism and interactivity. To build a smoothly operating point cloud action recognition system, it is usually necessary to filter out background and irrelevant points so that the data are clean and aligned. In previous work, however, point cloud filtering and action recognition were typically performed separately: few systems integrate the two stages, and some perform action recognition without any background filtering.

In this paper, we propose a pipeline that allows users to acquire point cloud data directly from the Azure Kinect DK and apply comprehensive automated preprocessing, producing cleaner point clouds free of background points and suitable for action recognition. Our approach uses PSTNet for point cloud action recognition and trains the model on a dataset of 12 action classes built with this automated preprocessing. Finally, we develop a real-time point cloud action recognition system that integrates the automated preprocessing.
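To make the preprocessing stage concrete, the following is a minimal sketch of per-frame background filtering using Open3D. It is an illustration under stated assumptions, not our exact implementation: the depth cutoff, plane-segmentation parameters, and the helper name filter_background are chosen for exposition, and the frame is assumed to have already been captured from the Azure Kinect DK.

```python
# Sketch of automated background filtering for one captured frame.
# Assumes `pcd` was already obtained from the Azure Kinect DK; all
# thresholds below are illustrative placeholders.
import numpy as np
import open3d as o3d

def filter_background(pcd: o3d.geometry.PointCloud,
                      max_depth: float = 3.0) -> o3d.geometry.PointCloud:
    # 1. Drop points beyond a depth cutoff (camera looks along +z),
    #    removing most of the distant background.
    pts = np.asarray(pcd.points)
    pcd = pcd.select_by_index(np.where(pts[:, 2] < max_depth)[0].tolist())

    # 2. Remove the dominant plane (e.g., floor or wall) via RANSAC.
    _, inliers = pcd.segment_plane(distance_threshold=0.02,
                                   ransac_n=3,
                                   num_iterations=200)
    pcd = pcd.select_by_index(inliers, invert=True)

    # 3. Discard sparse outliers left behind by sensor noise.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return pcd
```

In a pipeline of this kind, such a filter would run on every frame before the points are sampled to a fixed size and stacked into the spatio-temporal sequence consumed by the action recognition network.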