Keywords: computer science; object detection; robustness; artificial intelligence; inference; detector; residual; upgrade; object; deep learning; computer vision; process (computing); scale; pattern recognition; machine learning; algorithm
DOI: 10.1016/j.eswa.2023.122256
Abstract
In recent years, YOLO object detection models have advanced significantly thanks to the success of novel deep convolutional networks. Their success is often attributed to guidance techniques such as expertly tailored deeper backbones and meticulously crafted detector heads, which provide effective mechanisms for trading off accuracy against efficiency. However, these models are slow at inference, prone to false detections, and struggle to detect objects robustly across scales in occluded and densely cluttered scenes. To address these limitations, we propose a novel object detector, You Only Look Once and None Left (YOLO-NL). Our model includes a novel global dynamic label assignment strategy that allocates labels to specific anchors to balance high-precision detection with fine localization. To enhance multi-scale object detection in complex scenes, we upgrade CSPNet with a shortest-longest gradient strategy and PANet with a self-attention mechanism. To meet the need for fast inference, we propose the Rep-CSPNet network, which uses reparameterization to convert residual convolutions into ghost linear operations. Additionally, we accelerate feature extraction by deploying a serial SSPP structure. The proposed model is robust to objects at different scales under negative effects such as dusty, dense, ambiguous, and obstructed scenes. YOLO-NL achieved an mAP of 52.9% on the COCO 2017 test dataset, a significant improvement of 2.64% over the YOLOX baseline. Notably, YOLO-NL performs high-accuracy, high-speed face mask detection in real-life scenarios: evaluated on a self-built FMD dataset and large open-source datasets, it outperforms other state-of-the-art methods, achieving 98.8% accuracy while maintaining 130 FPS.
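The reparameterization step mentioned in the abstract can be made concrete with a RepVGG-style fusion: during training a block keeps a 3x3 convolution, a 1x1 convolution, and an identity branch (each with its own batch normalization), and at inference the three branches are algebraically merged into a single 3x3 convolution. The PyTorch sketch below illustrates this general technique only; it is not the paper's Rep-CSPNet code, and the names RepConv and fuse are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RepConv(nn.Module):
    """Train-time block: 3x3 conv + 1x1 conv + identity, each with its own BN.
    At inference, fuse() merges the three branches into one 3x3 conv."""
    def __init__(self, channels):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn3 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn_id = nn.BatchNorm2d(channels)

    def forward(self, x):
        return self.bn3(self.conv3(x)) + self.bn1(self.conv1(x)) + self.bn_id(x)

    @torch.no_grad()
    def fuse(self):
        """Return a single 3x3 conv equivalent to the three-branch forward."""
        def fold(weight, bn):
            # Fold BN into conv: w' = w * gamma/std, b' = beta - mean*gamma/std
            std = (bn.running_var + bn.eps).sqrt()
            return (weight * (bn.weight / std).reshape(-1, 1, 1, 1),
                    bn.bias - bn.running_mean * bn.weight / std)

        c = self.conv3.out_channels
        w3, b3 = fold(self.conv3.weight, self.bn3)
        # Zero-pad the 1x1 kernel to 3x3 so the branches can be summed
        w1, b1 = fold(F.pad(self.conv1.weight, [1, 1, 1, 1]), self.bn1)
        # The identity branch is a 3x3 kernel with a 1 at each channel's centre
        eye = torch.zeros(c, c, 3, 3)
        for i in range(c):
            eye[i, i, 1, 1] = 1.0
        wi, bi = fold(eye, self.bn_id)

        fused = nn.Conv2d(c, c, 3, padding=1)
        fused.weight.copy_(w3 + w1 + wi)
        fused.bias.copy_(b3 + b1 + bi)
        return fused
```

After training, calling block.fuse() yields one convolution whose output matches the three-branch forward pass up to floating-point error, which is where the inference speedup comes from.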
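The "ghost linear operations" referred to in the abstract follow the GhostNet idea: generate part of the output channels with an ordinary convolution and synthesize the remainder with cheap depthwise (linear) operations. Below is a minimal sketch under that assumption; GhostConv is a hypothetical name, and the half-and-half channel split is an illustrative choice.

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Produce half the output channels with a 1x1 conv ("primary" features)
    and the other half with a cheap 3x3 depthwise conv applied to them."""
    def __init__(self, c_in, c_out):  # c_out must be even in this sketch
        super().__init__()
        c_half = c_out // 2
        self.primary = nn.Conv2d(c_in, c_half, 1, bias=False)
        self.cheap = nn.Conv2d(c_half, c_half, 3, padding=1,
                               groups=c_half, bias=False)  # depthwise, "linear"

    def forward(self, x):
        y = self.primary(x)
        # Ghost features are cheap transforms of the primary ones
        return torch.cat([y, self.cheap(y)], dim=1)
```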
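A serial spatial pyramid pooling layout, popularized as SPPF in YOLOv5, chains small max-pools so that successive poolings reproduce the receptive fields of larger parallel pools (5x5, 9x9, 13x13) while reusing intermediate results. Assuming the paper's serial SSPP structure follows this pattern, a minimal sketch (SPPFSerial is a hypothetical name):

```python
import torch
import torch.nn as nn

class SPPFSerial(nn.Module):
    """Three chained 5x5 max-pools cover roughly the same receptive fields as
    parallel 5x5 / 9x9 / 13x13 pools, but each pool reuses the previous one."""
    def __init__(self, channels, k=5):
        super().__init__()
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
        self.proj = nn.Conv2d(channels * 4, channels, 1)  # fuse concatenated maps

    def forward(self, x):
        y1 = self.pool(x)    # ~5x5 receptive field
        y2 = self.pool(y1)   # ~9x9
        y3 = self.pool(y2)   # ~13x13
        return self.proj(torch.cat([x, y1, y2, y3], dim=1))
```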