ParaUDA: Invariant Feature Learning With Auxiliary Synthetic Samples for Unsupervised Domain Adaptation

Artificial intelligence · Computer science · Feature learning · Feature (linguistics) · Transfer of learning · Pattern recognition (psychology) · Pixel · Invariant (physics) · Domain (mathematical analysis) · Domain adaptation · Object detection · Computer vision · Cognitive neuroscience of visual object recognition · Optics (focus) · Feature extraction · Mathematics · Classifier (UML) · Philosophy · Mathematical analysis · Physics · Optics · Linguistics · Mathematical physics
Authors
Wenwen Zhang, Jiangong Wang, Yutong Wang, Fei-Yue Wang
Source
Journal: IEEE Transactions on Intelligent Transportation Systems [Institute of Electrical and Electronics Engineers]
Volume/Issue: 23 (11): 20217-20229 · Cited by: 6
Identifier
DOI: 10.1109/tits.2022.3176397
Abstract

Recognizing and locating objects algorithmically is an essential and challenging task for Intelligent Transportation Systems. However, the growing demand for labeled data hinders the wider application of deep learning-based object detection. One practical solution is to train a model on an existing dataset and then adapt it to new scenes, namely Unsupervised Domain Adaptation (UDA). However, most existing pixel-level methods focus on transferring the model from the source domain to the target domain and ignore the essence of UDA: learning domain-invariant features. Meanwhile, almost all feature-level methods neglect to match conditional distributions while aligning features between the source and target domains. To address these problems, this paper proposes ParaUDA, a novel framework that learns invariant representations for UDA at two levels: the pixel level and the feature level. At the pixel level, we adopt CycleGAN to perform domain transfer, converting the original unsupervised domain adaptation problem into supervised domain adaptation. At the feature level, we adopt an adversarial adaptation model to learn domain-invariant representations by aligning the distributions of image pairs that share the same mixture distribution. We evaluate the proposed framework in different settings: from synthetic scenes to real scenes, from normal weather to challenging weather, and across cameras. The results of all these experiments show that ParaUDA is effective and robust for adapting object detection models from source scenes to target scenes.
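The abstract's feature-level component is an adversarial adaptation model: a feature extractor is trained to produce representations that a domain discriminator cannot tell apart. A common way to realize such adversarial alignment (this is an illustrative sketch, not the authors' published code; the class and parameter names are hypothetical) is a gradient reversal layer, which is the identity on the forward pass but negates and scales the gradient on the backward pass, so the same backpropagation step that improves the discriminator pushes the feature extractor toward domain-invariant features:

```python
import numpy as np

class GradientReversal:
    """Toy gradient-reversal layer: identity on the forward pass,
    gradient scaled by -lam on the backward pass, so the feature
    extractor upstream is trained to *fool* the domain discriminator."""

    def __init__(self, lam: float = 1.0):
        self.lam = lam  # trade-off between task loss and alignment loss

    def forward(self, features: np.ndarray) -> np.ndarray:
        return features  # features pass through unchanged

    def backward(self, grad_from_discriminator: np.ndarray) -> np.ndarray:
        return -self.lam * grad_from_discriminator  # reversed, scaled gradient


grl = GradientReversal(lam=0.5)
feats = np.array([1.0, -2.0, 3.0])
out = grl.forward(feats)                  # identical to feats
grad = grl.backward(np.ones_like(feats))  # -> [-0.5, -0.5, -0.5]
```

In a full pipeline, both source and (CycleGAN-translated) target features would flow through such a layer into the domain discriminator during training, while the detection head is supervised only on labeled source images.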
