Computer Science
Mindset
Mainstream
Throughput
Object Detection
Software Deployment
Quantization (signal processing)
Artificial Intelligence
Software Engineering
Telecommunications
Computer Vision
Philosophy
Theology
Pattern Recognition (psychology)
Wireless
Authors
Chuyi Li, Lulu Li, Hongliang Jiang, Kaiheng Weng, Yifei Geng, Liang Li, Zaidan Ke, Qingyuan Li, Meng Cheng, Weiqiang Nie, Yiduo Li, Bo Zhang, Yufei Liang, Linyuan Zhou, Xiaoming Xu, Xiangxiang Chu, Xiaoming Wei, Xiaolin Wei
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Citations: 395
Identifiers
DOI: 10.48550/arxiv.2209.02976
Abstract
For years, the YOLO series has been the de facto industry-level standard for efficient object detection. The YOLO community has prospered overwhelmingly to enrich its use in a multitude of hardware platforms and abundant scenarios. In this technical report, we strive to push its limits to the next level, stepping forward with an unwavering mindset for industry application. Considering the diverse requirements for speed and accuracy in the real environment, we extensively examine the up-to-date object detection advancements either from industry or academia. Specifically, we heavily assimilate ideas from recent network design, training strategies, testing techniques, quantization, and optimization methods. On top of this, we integrate our thoughts and practice to build a suite of deployment-ready networks at various scales to accommodate diversified use cases. With the generous permission of YOLO authors, we name it YOLOv6. We also express our warm welcome to users and contributors for further enhancement. For a glimpse of performance, our YOLOv6-N hits 35.9% AP on the COCO dataset at a throughput of 1234 FPS on an NVIDIA Tesla T4 GPU. YOLOv6-S strikes 43.5% AP at 495 FPS, outperforming other mainstream detectors at the same scale (YOLOv5-S, YOLOX-S, and PPYOLOE-S). Our quantized version of YOLOv6-S even brings a new state-of-the-art 43.3% AP at 869 FPS. Furthermore, YOLOv6-M/L also achieves better accuracy performance (i.e., 49.5%/52.3%) than other detectors with a similar inference speed. We carefully conducted experiments to validate the effectiveness of each component. Our code is made available at https://github.com/meituan/YOLOv6.