Deep neural networks rely on large numbers of parameters and high computational complexity, which poses a challenge in the field of image processing. To address the Transformer's large model size and its limited ability to capture local image features, this paper proposes a lightweight composite Transformer structure that combines a spectral feature refinement module (SFRM) with a parameter-free attention augmentation module (PAAM). The two modules jointly refine the features used by the Transformer, aiming to enhance its performance without adding unnecessary complexity. The SFRM utilises the two-dimensional discrete cosine transform to convert the image from the spatial domain to the frequency domain, extracting the overall image structure from the low-frequency components and detailed feature information from the high-frequency components, thereby purifying features that are not salient in the spatial domain. The PAAM introduces parameter-free channel, spatial, and 3D attention enhancement mechanisms to extract correlation features of local information in the spatial domain without increasing the number of parameters, improving the expression of local features in the image. Additionally, a depthwise separable convolution MLP (DConv MLP) is introduced to further lighten the network model. The experimental results show that the proposed algorithm achieves an accuracy of 79.6% on the ImageNet-1K dataset, 91.6% on the Oxford 102 Flower Dataset, and 94.1% on the CIFAR-10 dataset. Compared with ViT-B, Swin-T, and CSwin-T, the number of parameters is reduced by 86.11%, 58.62%, and 47.83%, respectively. The parameter count is also 91.07% and 77.70% lower than that of VGG-16 and ResNet-110, respectively.
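
To make the two modules more concrete, the following is a minimal PyTorch sketch of the ideas described above: a DCT-based frequency-domain refinement step followed by a parameter-free 3D attention weighting. The names (SFRMSketch, paam_sketch), the per-frequency gating, and the SimAM-style energy formulation used for the attention are illustrative assumptions, not the authors' implementation.

```python
import math

import torch
import torch.nn as nn


def dct_matrix(n: int) -> torch.Tensor:
    """Orthonormal DCT-II basis of size n x n."""
    k = torch.arange(n, dtype=torch.float32).unsqueeze(1)    # frequency index
    i = torch.arange(n, dtype=torch.float32).unsqueeze(0)    # spatial index
    basis = torch.cos(math.pi * (2 * i + 1) * k / (2 * n)) * math.sqrt(2.0 / n)
    basis[0] /= math.sqrt(2.0)                               # DC row scaling
    return basis


class SFRMSketch(nn.Module):
    """Frequency-domain refinement: 2D DCT -> per-frequency gating -> inverse 2D DCT.

    Low-frequency coefficients carry the overall structure, high-frequency
    coefficients carry the detail; a learnable gate re-weights both bands.
    """

    def __init__(self, height: int, width: int):
        super().__init__()
        self.register_buffer("dh", dct_matrix(height))
        self.register_buffer("dw", dct_matrix(width))
        self.gate = nn.Parameter(torch.ones(height, width))  # per-frequency weight (assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:      # x: (B, C, H, W)
        freq = self.dh @ x @ self.dw.T                        # forward 2D DCT
        freq = freq * self.gate                               # refine in the frequency domain
        return self.dh.T @ freq @ self.dw                     # inverse 2D DCT


def paam_sketch(x: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Parameter-free 3D attention in the spirit of SimAM (an assumption here):
    each activation is re-weighted by an energy term computed from per-channel
    spatial statistics, so no learnable parameters are added."""
    h, w = x.shape[-2:]
    n = h * w - 1
    mu = x.mean(dim=(2, 3), keepdim=True)
    var = ((x - mu) ** 2).sum(dim=(2, 3), keepdim=True) / n
    energy = ((x - mu) ** 2) / (4 * (var + eps)) + 0.5
    return x * torch.sigmoid(energy)


if __name__ == "__main__":
    feat = torch.randn(2, 64, 14, 14)        # toy feature map
    refined = SFRMSketch(14, 14)(feat)       # spectral refinement
    attended = paam_sketch(refined)          # parameter-free attention
    print(refined.shape, attended.shape)     # both remain (2, 64, 14, 14)
```

Because the 2D DCT is applied as two orthonormal matrix multiplications, the inverse transform is simply the transposed product, and the only learnable weights in this sketch are the per-frequency gate; the attention step itself contributes no parameters.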