Keywords
Artificial intelligence, computer science, feature learning, feature (linguistics), transfer learning, pattern recognition (psychology), pixel, invariant (physics), domain (mathematical analysis), domain adaptation, object detection, computer vision, cognitive neuroscience of visual object recognition, optics (focus), feature extraction, mathematics, classifier (UML), philosophy, mathematical analysis, physics, optics, linguistics, mathematical physics
Authors
Wenwen Zhang, Jiangong Wang, Yutong Wang, Fei-Yue Wang
Source
Journal: IEEE Transactions on Intelligent Transportation Systems
[Institute of Electrical and Electronics Engineers]
Date: 2022-11-01
Volume/Issue: 23 (11): 20217-20229
Citations: 6
Identifier
DOI: 10.1109/TITS.2022.3176397
Abstract
Recognizing and locating objects algorithmically are essential and challenging tasks for Intelligent Transportation Systems. However, the growing demand for large amounts of labeled data hinders the wider application of deep learning-based object detection. One promising solution is to train the target model on an existing dataset and then adapt it to new scenes, namely Unsupervised Domain Adaptation (UDA). However, most existing pixel-level methods focus mainly on transferring the model from the source domain to the target domain and ignore the essence of UDA, which is to learn domain-invariant features. Meanwhile, almost all feature-level methods neglect to match conditional distributions while aligning features between the source and target domains. Considering these problems, this paper proposes ParaUDA, a novel framework that learns invariant representations for UDA at two levels: the pixel level and the feature level. At the pixel level, we adopt CycleGAN to perform domain transfer, converting the original unsupervised domain adaptation problem into a supervised one. At the feature level, we adopt an adversarial adaptation model that learns domain-invariant representations by aligning the distributions of different image pairs that share the same mixture distribution. We evaluate the proposed framework in diverse settings: from synthetic scenes to real scenes, from normal weather to challenging weather, and across cameras. The results of all these experiments show that ParaUDA is effective and robust for adapting object detection models from source scenes to target scenes.
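To make the feature-level idea concrete, below is a minimal sketch of adversarial feature alignment with a gradient reversal layer, a standard construction for learning domain-invariant representations. This is not the authors' ParaUDA code; the class names, channel sizes, and the `lam` scaling factor are illustrative assumptions, and the paper should be consulted for the actual architecture and losses.

```python
# Minimal sketch (NOT the authors' implementation) of feature-level
# adversarial alignment for UDA, using a gradient reversal layer (GRL).
# Names such as DomainDiscriminator and the hyperparameter `lam` are
# hypothetical choices for illustration only.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on
    the backward pass, so the feature extractor is trained to fool the
    domain discriminator."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class DomainDiscriminator(nn.Module):
    """Predicts whether a feature map comes from the source or target domain."""

    def __init__(self, in_ch=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 1),
        )

    def forward(self, feat, lam=1.0):
        return self.net(GradReverse.apply(feat, lam))


# Usage: the detection loss on labeled source images (omitted here) is
# combined with an adversarial domain loss on source and target features.
disc = DomainDiscriminator()
bce = nn.BCEWithLogitsLoss()
src_feat = torch.randn(4, 256, 32, 32)   # backbone features, source batch
tgt_feat = torch.randn(4, 256, 32, 32)   # backbone features, target batch
logits = torch.cat([disc(src_feat), disc(tgt_feat)])
labels = torch.cat([torch.ones(4, 1), torch.zeros(4, 1)])
domain_loss = bce(logits, labels)        # backprop through GRL aligns domains
domain_loss.backward()
```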