Abstract Efficient and accurate grasping remains a significant challenge due to the diverse object shapes, sizes, and poses encountered in practical scenarios. Previous grasp detection methods have been limited by their receptive fields: they demonstrate insufficient capability in extracting relevant grasp features and do not effectively leverage multi-scale features, resulting in limited detection accuracy. This paper introduces the Large-Kernel Residual Grasp Network (LKRG-Net), a novel network designed to address these challenges by integrating advanced feature extraction and fusion techniques. First, the proposed model employs a dual-encoding UniRepLKNet-ResNet50 backbone that encodes grasp features at both global and local levels, ensuring comprehensive extraction of relevant characteristics. Second, a Grasp Fusion Splicing module effectively splices and merges the dual-encoded features, preventing the loss of crucial information. Finally, a Selective Fusion Feature Pyramid Network decodes multi-scale feature information, enhancing the utilization of shallow features while selectively filtering the fused information. Comprehensive testing on the Cornell, Jacquard, and Jacquard_V2 datasets shows that LKRG-Net surpasses existing advanced methods in accuracy and robustness. Grasp detection experiments conducted in real object scenarios further confirm the model's effectiveness in dynamic environments, providing a solid foundation for future advances in robotic grasping tasks. The code is available at https://github.com/Fyzyukk/LKRG-Net
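
To make the pipeline described above concrete, the following is a minimal PyTorch sketch of the three stages (dual encoding, fusion splicing, multi-scale decoding). All class names, channel sizes, and the quality/angle/width output heads are illustrative assumptions, not the authors' implementation; the placeholder convolutions merely stand in for the UniRepLKNet and ResNet50 branches. The actual code is in the repository linked above.

```python
import torch
import torch.nn as nn


class GraspFusionSplicing(nn.Module):
    # Hypothetical stand-in for the Grasp Fusion Splicing module: splice
    # (concatenate) the dual-encoded features along channels, then merge
    # them with a 1x1 projection.
    def __init__(self, global_ch: int, local_ch: int, out_ch: int):
        super().__init__()
        self.merge = nn.Conv2d(global_ch + local_ch, out_ch, kernel_size=1)

    def forward(self, f_global: torch.Tensor, f_local: torch.Tensor) -> torch.Tensor:
        return self.merge(torch.cat([f_global, f_local], dim=1))


class LKRGNetSketch(nn.Module):
    def __init__(self, in_ch: int = 3, feat_ch: int = 64):
        super().__init__()
        # Placeholder encoders: a large-kernel branch standing in for
        # UniRepLKNet (global context) and a small-kernel residual-style
        # branch standing in for ResNet50 (local detail).
        self.global_enc = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, kernel_size=13, padding=6), nn.ReLU(),
        )
        self.local_enc = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.fusion = GraspFusionSplicing(feat_ch, feat_ch, feat_ch)
        # Placeholder for the Selective Fusion Feature Pyramid Network decoder.
        self.decoder = nn.Conv2d(feat_ch, feat_ch, kernel_size=3, padding=1)
        # Typical grasp-detection heads (quality, angle as sin/cos, width);
        # assumed here, not stated in the abstract.
        self.quality = nn.Conv2d(feat_ch, 1, kernel_size=1)
        self.angle = nn.Conv2d(feat_ch, 2, kernel_size=1)
        self.width = nn.Conv2d(feat_ch, 1, kernel_size=1)

    def forward(self, x: torch.Tensor):
        fused = self.fusion(self.global_enc(x), self.local_enc(x))
        feat = torch.relu(self.decoder(fused))
        return self.quality(feat), self.angle(feat), self.width(feat)


if __name__ == "__main__":
    model = LKRGNetSketch()
    q, a, w = model(torch.randn(1, 3, 224, 224))
    print(q.shape, a.shape, w.shape)  # per-pixel grasp maps at input resolution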