Computer science
Feature (linguistics)
Convolution (computer science)
Artificial intelligence
Focus (optics)
Channel (broadcasting)
Convolutional neural network
Pattern recognition (psychology)
Backbone network
Object detection
Field (mathematics)
Computer vision
Artificial neural network
Mathematics
Telecommunications
Physics
Philosophy
Optics
Pure mathematics
Linguistics
Authors
Lingyun Shen, Baihe Lang, Zhengxun Song
Source
Journal: IEEE Access
Publisher: Institute of Electrical and Electronics Engineers
Date: 2023-01-01
Volume/Pages: 11: 125122-125137
Citations: 15
Identifier
DOI: 10.1109/access.2023.3330844
Abstract
The improved YOLOv8 model (DCN_C2f+SC_SA+YOLOv8, hereinafter referred to as DS-YOLOv8) is proposed to address object detection challenges in complex remote sensing image tasks. It aims to overcome limitations such as the restricted receptive field caused by fixed convolutional kernels in the YOLO backbone network and the inadequate multi-scale feature learning that results from the spatial and channel attention fusion mechanism's inability to adapt to the input data's feature distribution. The DS-YOLOv8 model introduces the Deformable Convolution C2f (DCN_C2f) module in the backbone network to enable adaptive adjustment of the network's receptive field. Additionally, a lightweight Self-Calibrating Shuffle Attention (SC_SA) module is designed for the spatial and channel attention mechanisms. This design allows for adaptive encoding of contextual information, preventing the loss of feature details caused by repeated convolutions and improving the representation of multi-scale, occluded, and small object features. Moreover, the DS-YOLOv8 model incorporates the dynamic non-monotonic focusing mechanism of Wise-IoU in its position regression loss function to further enhance performance. Experimental results demonstrate the excellent performance of the DS-YOLOv8 model on several public datasets, including RSOD, NWPU VHR-10, DIOR, and VEDAI. The average mAP@0.5 values achieved are 97.7%, 92.9%, 89.7%, and 78.9%, respectively, and the corresponding average mAP@0.5:0.95 values are 74.0%, 64.3%, 70.7%, and 51.1%. Importantly, the model maintains real-time inference capability. Compared with the YOLOv8 series models, DS-YOLOv8 delivers significant performance improvements and outperforms other mainstream models in detection accuracy.
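The abstract names the DCN_C2f module but does not describe its implementation, so the sketch below is an interpretation rather than the authors' code: it shows how a C2f-style split-and-concatenate block from YOLOv8 could replace its fixed 3x3 bottleneck convolution with a deformable convolution (via torchvision.ops.DeformConv2d), which is one way to realize the "adaptive receptive field" the paper attributes to DCN_C2f. All class names, channel widths, and the offset-prediction layer here are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a C2f-style block whose 3x3
# bottleneck convolution is swapped for a deformable convolution, to
# illustrate the adaptive-receptive-field idea attributed to DCN_C2f.
# Module names and hyper-parameters are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableBottleneck(nn.Module):
    """Residual bottleneck with a 3x3 deformable conv and learned offsets."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # One (dx, dy) offset per kernel position and output location.
        self.offset = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                kernel_size, padding=pad)
        self.dcn = DeformConv2d(channels, channels, kernel_size, padding=pad)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset(x)  # data-dependent sampling positions
        return x + self.act(self.bn(self.dcn(x, offsets)))


class DCN_C2f_Sketch(nn.Module):
    """Split-transform-concat block in the spirit of YOLOv8's C2f,
    using deformable bottlenecks (an interpretation, not the paper's code)."""

    def __init__(self, c_in: int, c_out: int, n: int = 2):
        super().__init__()
        c_hidden = c_out // 2
        self.cv1 = nn.Conv2d(c_in, 2 * c_hidden, 1)
        self.blocks = nn.ModuleList(
            DeformableBottleneck(c_hidden) for _ in range(n))
        self.cv2 = nn.Conv2d((2 + n) * c_hidden, c_out, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = list(self.cv1(x).chunk(2, dim=1))   # split into two branches
        for block in self.blocks:
            y.append(block(y[-1]))              # chain deformable bottlenecks
        return self.cv2(torch.cat(y, dim=1))    # concatenate and fuse


if __name__ == "__main__":
    block = DCN_C2f_Sketch(64, 64)
    print(block(torch.randn(1, 64, 40, 40)).shape)  # torch.Size([1, 64, 40, 40])
```

In this reading, the learned per-pixel offsets let each output location sample the input at data-dependent positions instead of a rigid 3x3 grid, which is the mechanism the abstract credits with adapting the backbone's receptive field to objects of varying scale and shape.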