YOLOv8 is one of the most widely used object detection algorithms. However, its network model contains a large number of parameters, which leads to slow inference on embedded devices. A key challenge in industrial applications of this algorithm is therefore to reduce the parameter size of the YOLOv8 model without significantly compromising its detection accuracy, so that it can run efficiently on embedded hardware. To address this, a structured pruning strategy based on Torch-Pruning was designed for medium-sized YOLOv8 models such as YOLOv8m.

In this study, the model was trained on the COCO dataset, yielding a computational workload of 39.6G and a parameter count of 25.9M. Thirteen pruning iterations were conducted at different pruning rates to systematically reduce the model's parameter count and identify the optimal pruned model. Comparison with the unpruned model showed promising results: the computational workload decreased from 39.6G to 33.7G, a reduction of 14.9%; the parameter count decreased from 25.9M to 22.0M, a reduction of 15%; the average precision improved from 0.6 before pruning to 0.7 after fine-tuning the pruned model; and the inference time per image decreased from 10.2 ms before pruning to 9.5 ms afterward.
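As a rough illustration of the pipeline described above, the sketch below iteratively prunes a YOLOv8m model with the Torch-Pruning library (v1.x API assumed), loading the model through the ultralytics package. The 13 iterative steps and the roughly 15% overall channel sparsity mirror the numbers reported here, but the importance criterion, the ignored-layer selection, and the fine-tuning placement are assumptions for illustration, not the study's exact configuration.

```python
import torch
import torch_pruning as tp
from ultralytics import YOLO

# Load the pretrained YOLOv8m detection model (an ordinary nn.Module).
model = YOLO("yolov8m.pt").model.eval()
example_inputs = torch.randn(1, 3, 640, 640)

# L2-magnitude channel importance; the study's exact criterion is not stated.
importance = tp.importance.MagnitudeImportance(p=2)

# Keep the detection head intact: pruning its output convs would change the
# prediction shapes. (Assumed choice, matching common practice.)
ignored_layers = [m for m in model.modules()
                  if m.__class__.__name__ == "Detect"]

pruner = tp.pruner.MagnitudePruner(
    model,
    example_inputs,
    importance=importance,
    iterative_steps=13,   # thirteen pruning iterations, as in the study
    ch_sparsity=0.15,     # ~15% of channels removed overall (assumed target)
    ignored_layers=ignored_layers,
)

base_macs, base_params = tp.utils.count_ops_and_params(model, example_inputs)
for step in range(13):
    pruner.step()  # physically removes the lowest-ranked channel groups
    macs, params = tp.utils.count_ops_and_params(model, example_inputs)
    print(f"step {step + 1}: {macs / 1e9:.1f}G MACs, {params / 1e6:.1f}M params")
    # ... fine-tune on COCO here to recover accuracy before the next step ...
```

One practical caveat: Torch-Pruning's own YOLOv8 example also replaces the chunk-based forward of the C2f module with a split-based variant so that the dependency graph can be traced; that detail is omitted from this sketch.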