Compared with natural images, remote sensing images suffer from more severe loss of detail and therefore pose greater challenges for image reconstruction. We propose a super-resolution network for remote sensing images based on hybrid dilated convolution and adaptive pruning. The network expands the receptive field through hybrid dilated convolution to extract finer image features. Bilinear interpolation is used to upsample the extracted features, and residual connections are introduced to reduce information loss in the convolutional layers. A multi-scale attention fusion block enhances both local and global information, enabling better recovery of the fine details and overall structure of remote sensing images. However, the fused network's increased computational complexity and parameter count make deployment on mobile and edge devices difficult. A dual-channel adaptive pruning module is therefore employed to prune redundant channel features, improving the model's lightweight design and practicality while preserving image detail. Experimental results show that the proposed method effectively reconstructs texture details in remote sensing images while maintaining a balanced trade-off between computational efficiency and super-resolution reconstruction quality.
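As an illustrative sketch only (not the paper's implementation), the receptive-field gain from hybrid dilated convolution can be quantified with the standard formula for stacked stride-1 convolutions; the dilation pattern [1, 2, 5] below is a commonly used hybrid schedule that avoids gridding artifacts and is assumed here for illustration:

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 convolution layers.

    Each layer with kernel size k and dilation d widens the
    receptive field by (k - 1) * d pixels.
    """
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Three plain 3x3 convolutions (dilation 1 everywhere).
plain = receptive_field([3, 3, 3], [1, 1, 1])   # -> 7

# Hybrid dilated stack of three 3x3 convolutions with rates 1, 2, 5:
# same parameter count, much wider receptive field.
hdc = receptive_field([3, 3, 3], [1, 2, 5])     # -> 17

print(plain, hdc)
```

With identical kernel sizes and parameter counts, the hybrid dilated stack more than doubles the receptive field (17 vs. 7), which is the mechanism the abstract credits for capturing more detailed image features.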