Domain
Artificial intelligence
Computer science
Computer vision
Object
Object detection
Coding
Domain adaptation
Consistency
Adaptation
Detector
Pattern recognition
Mathematics
Mathematical analysis
Telecommunications
Physics
Set (abstract data type)
Classifier
Optics
Programming language
Authors
Huayi Zhou,Fei Jiang,Hongtao Lu
Identifier
DOI:10.1016/j.cviu.2023.103649
Abstract
Domain adaptive object detection (DAOD) aims to alleviate transfer performance degradation caused by the cross-domain discrepancy. However, most existing DAOD methods are dominated by outdated and computationally intensive two-stage Faster R-CNN, which is not the first choice for industrial applications. In this paper, we propose a novel semi-supervised domain adaptive YOLO (SSDA-YOLO) based method to improve cross-domain detection performance by integrating the compact one-stage stronger detector YOLOv5 with domain adaptation. Specifically, we adapt the knowledge distillation framework with the Mean Teacher model to assist the student model in obtaining instance-level features of the unlabeled target domain. We also utilize the scene style transfer to cross-generate pseudo images in different domains for remedying image-level differences. In addition, an intuitive consistency loss is proposed to further align cross-domain predictions. We evaluate SSDA-YOLO on public benchmarks including PascalVOC, Clipart1k, Cityscapes, and Foggy Cityscapes. Moreover, to verify its generalization, we conduct experiments on yawning detection datasets collected from various real classrooms. The results show considerable improvements of our method in these DAOD tasks, which reveals both the effectiveness of proposed adaptive modules and the urgency of applying more advanced detectors in DAOD. Our code is available on https://github.com/hnuzhy/SSDA-YOLO.
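The abstract describes two core training mechanisms: a Mean Teacher whose weights track the student by exponential moving average (knowledge distillation), and a consistency loss that aligns cross-domain predictions. A minimal sketch of both ideas, in plain Python with illustrative names and an assumed decay value (none of these specifics are taken from the paper):

```python
def ema_update(teacher_params, student_params, ema_decay=0.999):
    """Mean Teacher update: each teacher weight is an exponential moving
    average of the corresponding student weight."""
    return [ema_decay * t + (1.0 - ema_decay) * s
            for t, s in zip(teacher_params, student_params)]

def consistency_loss(preds_a, preds_b):
    """Mean squared error between two prediction vectors, a simple way to
    penalize disagreement between cross-domain predictions."""
    assert len(preds_a) == len(preds_b)
    return sum((a - b) ** 2 for a, b in zip(preds_a, preds_b)) / len(preds_a)
```

In an actual SSDA-YOLO-style loop, `ema_update` would run once per training step over the detector's weight tensors, and the consistency term would be added to the usual detection losses; see the authors' repository for the real implementation.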