Computer Science
Artificial Intelligence
Regularization
Object Detection
Re-weighting
Pattern Recognition
Object
Ratio
Machine Learning
Consistency
Variance
Class Imbalance
Authors
Qiushan Guo, Yao Mu, Jianyu Chen, Tianqi Wang, Yizhou Yu, Ping Luo
Identifier
DOI:10.1109/cvpr52688.2022.01412
Abstract
Recent Semi-Supervised Object Detection (SS-OD) methods are mainly based on self-training, i.e., generating hard pseudo-labels with a teacher model on unlabeled data as supervisory signals. Although they have achieved certain success, the limited labeled data in semi-supervised learning scales up the challenges of object detection. We analyze the challenges these methods meet through empirical experiments and find that the massive number of False Negative samples and the inferior localization precision of pseudo-labels have been largely overlooked. Besides, the large variance of object sizes and class imbalance (i.e., the extreme ratio between background and object) hinder the performance of prior arts. We overcome these challenges by introducing a novel approach, Scale-Equivalent Distillation (SED), a simple yet effective end-to-end knowledge distillation framework that is robust to large object size variance and class imbalance. SED has several appealing benefits compared to previous works. (1) SED imposes a consistency regularization to handle the large scale variance problem. (2) SED alleviates the noise arising from False Negative samples and inferior localization precision. (3) A re-weighting strategy implicitly screens the potential foreground regions of the unlabeled data to reduce the effect of class imbalance. Extensive experiments show that SED consistently outperforms recent state-of-the-art methods on different datasets by significant margins. For example, it surpasses the supervised counterpart by more than 10 mAP when using 5% and 10% labeled data on MS-COCO.
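The abstract describes two of SED's ingredients: a consistency term between predictions for the same regions at different input scales (using soft labels rather than hard thresholded pseudo-labels), and a re-weighting that down-weights likely background regions. The paper's actual loss is not given here, so the following is only a minimal illustrative sketch of that idea; the function name, the KL-based consistency term, and the `1 - P(background)` weighting are assumptions, not the authors' exact formulation.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over class logits.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scale_consistency_loss(logits_orig, logits_rescaled, background_index=0):
    """Hypothetical sketch of a scale-equivalent consistency term.

    logits_orig / logits_rescaled: (num_regions, num_classes) class logits
    predicted for the same regions at two input scales. The distillation
    signal is a KL term between the two soft predictive distributions,
    re-weighted by the predicted foreground probability so that likely
    background regions (the dominant class) contribute less.
    """
    p = softmax(logits_orig)        # soft labels from the original scale
    q = softmax(logits_rescaled)    # predictions at the rescaled input
    # Per-region KL(p || q); soft labels avoid the false-negative noise
    # that hard, thresholded pseudo-labels introduce.
    kl = np.sum(p * (np.log(p + 1e-8) - np.log(q + 1e-8)), axis=-1)
    # Implicit foreground screening: weight each region by 1 - P(background).
    weights = 1.0 - p[:, background_index]
    return float(np.sum(weights * kl) / (np.sum(weights) + 1e-8))
```

If both scales produce identical predictions, the loss is zero; it grows as the two distributions diverge, and regions confidently predicted as background contribute little regardless of their disagreement.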