Due to the limited features of small objects and the feature loss that occurs during feature transmission, small object detection remains unsatisfactory and is regarded as a challenging task in computer vision. Typically, multi-scale feature fusion methods are used to compensate for the missing small-object information. However, because features at different scales are misaligned to a certain degree, traditional feature fusion methods suffer from aliasing effects, which degrade feature consistency and harm small object detection. Therefore, we propose an attention-guided feature alignment fusion module that exploits features from adjacent scales to fuse spatial and contextual information efficiently, alleviating the feature aliasing problem. In addition, we propose a shallow feature supplement module that uses attention mechanisms to inject small-object information from low-level features into the bottom of the neck without an additional detection head, significantly improving the detector's capability on small objects. Experiments on the MS COCO 2017 and VisDrone2019 datasets demonstrate the superiority of our method. Specifically, compared to the baseline, our method improves AP50 and APS by 1.4 and 0.9 on the COCO dataset, and AP, AP50, and AP75 by 4.6, 4.3, and 5.2 on the VisDrone2019 dataset.
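To make the idea of attention-guided fusion of adjacent-scale features concrete, the sketch below shows one minimal way such a module could be realized in PyTorch. It is an illustrative assumption, not the paper's actual implementation: the class name AttentionGuidedFusion, the spatial-attention design, and all layer choices are hypothetical, and the real module may align and fuse features differently.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGuidedFusion(nn.Module):
    """Hypothetical sketch: fuse a shallow feature with the adjacent deeper
    feature using a predicted spatial attention map. Assumed design only."""

    def __init__(self, channels: int):
        super().__init__()
        # Predict a per-pixel blending weight from the concatenated features.
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, channels // 4, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # Resample the deeper feature to the shallow feature's spatial size.
        deep_up = F.interpolate(deep, size=shallow.shape[-2:],
                                mode="bilinear", align_corners=False)
        # The attention map decides, per location, how much of each scale to keep.
        w = self.attn(torch.cat([shallow, deep_up], dim=1))
        fused = w * shallow + (1.0 - w) * deep_up
        return self.proj(fused)


if __name__ == "__main__":
    fuse = AttentionGuidedFusion(channels=256)
    p3 = torch.randn(1, 256, 80, 80)   # shallow, higher-resolution level
    p4 = torch.randn(1, 256, 40, 40)   # adjacent deeper level
    print(fuse(p3, p4).shape)          # torch.Size([1, 256, 80, 80])
```

In this reading, the per-location weight lets the fused map favor high-resolution detail where small objects sit and contextual information elsewhere, which is one plausible way to reduce the aliasing that plain element-wise addition of misaligned features would introduce.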