Keywords
Adverse weather, Robustness (evolution), Detector, Object detection, Preprocessor, Feature learning, Computer science, Deep learning, Machine learning, Pattern recognition (psychology), Computer vision, Artificial intelligence, Meteorology, Telecommunications, Geography, Biochemistry, Gene, Chemistry
Authors
Lucai Wang, Hongda Qin, Xuanyu Zhou, Xiao Lu, Fengting Zhang
Source
Journal: IEEE Transactions on Instrumentation and Measurement [Institute of Electrical and Electronics Engineers]
Date: 2022-01-01
Volume/Issue: 1-1
Citations: 21
Identifier
DOI: 10.1109/TIM.2022.3229717
Abstract
Learning a robust object detector that runs in real time under adverse weather is of great importance for the visual perception task in autonomous driving systems. In this article, we propose a framework that upgrades YOLO into a robust detector, denoted R(obust)-YOLO, without requiring annotations in adverse weather. Considering the distribution gap between normal-weather and adverse-weather images, our framework consists of an image quasi-translation network (QTNet) and a feature calibration network (FCNet) that gradually adapt the normal-weather domain to the adverse-weather domain. Specifically, we use the simple yet effective QTNet to generate images that inherit the annotations of the normal-weather domain and interpolate the gap between the two domains. Then, in FCNet, we propose two kinds of adversarial-learning-based feature calibration modules to effectively align the feature representations of the two domains in a local-to-global manner. Because this learning framework leaves the original YOLO structure unchanged, it is applicable to all YOLO-series detectors. Extensive experimental results for our R-YOLOv3, R-YOLOv5, and R-YOLOX on both hazy and rainy datasets show that our method outperforms detectors that use dehazing/deraining as a preprocessing step, as well as other unsupervised domain adaptation (UDA)-based detectors, confirming that our method improves robustness by leveraging only unlabeled adverse-weather images. Our code and pretrained models are available at https://github.com/qinhongda8/R-YOLO.
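The abstract only summarizes FCNet's adversarial feature calibration. As a rough illustration of how such adversarial alignment is commonly implemented, below is a minimal PyTorch sketch using a gradient-reversal layer and a small per-location domain discriminator. The names (DomainDiscriminator, alignment_loss), channel sizes, and loss form are illustrative assumptions, not the paper's actual FCNet modules; consult the linked repository for the real implementation.

```python
# Minimal sketch (assumption, not the paper's code): adversarial feature
# alignment via a gradient-reversal layer (GRL), the standard mechanism
# behind "adversarial-learning-based feature calibration".
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales gradients by -lam in the backward pass."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed gradient drives the backbone to *confuse* the discriminator.
        return grad_output.neg() * ctx.lam, None


def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)


class DomainDiscriminator(nn.Module):
    """Per-location (local) domain classifier over a backbone feature map."""

    def __init__(self, in_channels, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, kernel_size=1),  # one domain logit per spatial cell
        )

    def forward(self, feat, lam=1.0):
        return self.net(grad_reverse(feat, lam))


def alignment_loss(disc, feat_normal, feat_adverse, lam=1.0):
    """The discriminator learns normal (0) vs. adverse (1); through the GRL the
    backbone minimizes the negated objective, yielding domain-invariant features."""
    bce = nn.BCEWithLogitsLoss()
    logits_n = disc(feat_normal, lam)
    logits_a = disc(feat_adverse, lam)
    return bce(logits_n, torch.zeros_like(logits_n)) + \
           bce(logits_a, torch.ones_like(logits_a))
```

In a training loop, a loss of this kind would be added to the standard YOLO detection loss computed on the annotated normal-weather (and QTNet-translated) images, so the backbone learns features that the discriminator cannot attribute to either domain.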