Failure
Computer science
Algorithm
Reduction (mathematics)
Object detection
Minimum bounding box
Edge computing
Artificial intelligence
Real-time computing
Image (mathematics)
Pattern recognition (psychology)
Mathematics
Geometry
Parallel computing
Authors
Lijia Cao, Pinde Song, Yongchao Wang, Yang Yang, Baoyu Peng
Source
Journal: Electronics
[MDPI AG]
Date: 2023-05-18
Volume/Issue: 12 (10): 2274-2274
Citations: 5
Identifier
DOI: 10.3390/electronics12102274
Abstract
Unmanned aerial vehicle (UAV) image detection algorithms are critical in performing military countermeasures and disaster search and rescue. The state-of-the-art object detection algorithm known as you only look once (YOLO) is widely used for detecting UAV images. However, it faces challenges such as a high number of floating-point operations (FLOPs), redundant parameters, slow inference speed, and poor performance in detecting small objects. To address these issues, an improved lightweight, real-time detection algorithm for UAV images was proposed for edge computing platforms. In the presented method, MobileNetV3 was used as the YOLOv5 backbone network to reduce the numbers of parameters and FLOPs. To enhance the feature extraction ability of MobileNetV3, an efficient channel attention (ECA) mechanism was introduced into it. Furthermore, to improve the detection ability for small objects, an extra prediction head was added to the neck structure, and two kinds of neck structures with different parameter scales were designed to meet the requirements of different embedded devices. Finally, the FocalEIoU loss function was introduced into YOLOv5 to accelerate bounding box regression and improve the localization accuracy of the algorithm. To validate the performance of the proposed algorithm, we compared it with other algorithms on the VisDrone-Det2021 dataset. The results showed that, compared with YOLOv5s, MELF-YOLOv5-S achieved a 51.4% reduction in the number of parameters and a 38.6% decrease in the number of FLOPs. MELF-YOLOv5-L had 87.4% and 47.4% fewer parameters and FLOPs, respectively, and achieved higher detection accuracy than YOLOv5l.
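The ECA block mentioned in the abstract is a published lightweight channel-attention design (ECA-Net), in which a 1-D convolution over the globally pooled channel descriptor replaces the SE bottleneck. Below is a minimal PyTorch sketch of such a block as it could be attached to a MobileNetV3 stage; the class name `ECA` and the adaptive-kernel hyperparameters `gamma` and `b` follow the ECA-Net formulation and are illustrative only, not the authors' released code.

```python
import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient channel attention: a 1-D convolution over the pooled
    channel descriptor produces per-channel weights (ECA-Net style sketch)."""

    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Adaptive kernel size derived from the channel count, rounded to odd.
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) -> pooled channel descriptor (N, C, 1, 1)
        y = self.pool(x)
        # Treat channels as a 1-D sequence for local cross-channel interaction.
        y = self.conv(y.squeeze(-1).transpose(-1, -2)).transpose(-1, -2).unsqueeze(-1)
        # Rescale the input feature map channel-wise.
        return x * self.sigmoid(y)

if __name__ == "__main__":
    x = torch.randn(2, 96, 40, 40)   # e.g. a hypothetical MobileNetV3 feature map
    print(ECA(96)(x).shape)          # torch.Size([2, 96, 40, 40])
```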