
ParaUDA: Invariant Feature Learning With Auxiliary Synthetic Samples for Unsupervised Domain Adaptation

Keywords: unsupervised domain adaptation; domain-invariant feature learning; transfer learning; object detection; computer vision; pattern recognition; feature extraction; artificial intelligence
Authors
Wenwen Zhang, Jiangong Wang, Yutong Wang, Fei-Yue Wang
Source
Journal: IEEE Transactions on Intelligent Transportation Systems [Institute of Electrical and Electronics Engineers]
Volume/Issue: 23 (11), pp. 20217-20229; cited by 6
Identifier
DOI: 10.1109/tits.2022.3176397
Abstract

Recognizing and locating objects algorithmically are essential and challenging tasks for Intelligent Transportation Systems. However, the growing demand for labeled data hinders the wider application of deep learning-based object detection. One practical solution is to train a model on an existing dataset and then adapt it to new scenes, namely Unsupervised Domain Adaptation (UDA). However, most existing pixel-level methods focus on transferring the model from the source domain to the target domain and overlook the essence of UDA: learning domain-invariant features. Meanwhile, almost all feature-level methods align marginal feature distributions between the source and target domains but fail to match their conditional distributions. To address these problems, this paper proposes ParaUDA, a novel framework that learns invariant representations for UDA at two levels: the pixel level and the feature level. At the pixel level, we adopt CycleGAN to perform domain transfer, converting the original unsupervised domain adaptation problem into a supervised one. At the feature level, we adopt an adversarial adaptation model to learn domain-invariant representations by aligning the distributions of image pairs drawn from the same mixture distribution. We evaluate the proposed framework across diverse settings: from synthetic scenes to real scenes, from normal weather to challenging weather, and across different cameras. The results of all these experiments show that ParaUDA is effective and robust for adapting object detection models from source scenes to target scenes.
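The pixel-level component relies on CycleGAN, whose defining constraint is cycle consistency: translating an image to the other domain and back should reconstruct the original. The abstract gives no implementation details, so the following is a minimal toy sketch of that loss only, with hypothetical linear "generators" G (source to target) and F (target to source) standing in for the real networks, and images flattened to pixel lists.

```python
# Toy cycle-consistency sketch (hypothetical; not the authors' code).
# With G(x) = 2x and F(y) = y/2, the round trip F(G(x)) recovers x
# exactly, so the cycle loss is zero; real CycleGAN training drives
# two convolutional generators toward this property.
def G(x):  # "source -> target" generator stand-in
    return [2.0 * v for v in x]

def F(y):  # "target -> source" generator stand-in
    return [0.5 * v for v in y]

def l1(u, v):
    # mean absolute error between two flattened "images"
    return sum(abs(a - b) for a, b in zip(u, v)) / len(u)

x = [0.1, 0.4, 0.9]  # a source "image" as a flat pixel list
y = [0.6, 0.2, 0.8]  # a target "image"

# cycle loss = |F(G(x)) - x| + |G(F(y)) - y|
cycle_loss = l1(F(G(x)), x) + l1(G(F(y)), y)
```

Because multiplying by 2.0 and 0.5 is exact in floating point, the round trips here reconstruct the inputs bit-for-bit and the loss is exactly zero; the adversarial GAN losses that pair with this term are omitted for brevity.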
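The feature-level component uses adversarial training to make source and target features indistinguishable to a domain discriminator. Since the abstract does not specify the mechanism, here is a minimal, hypothetical sketch of the standard building block used in such adversarial adaptation models: a gradient reversal layer over a 1-D logistic domain discriminator. All names (w, a, b, lam) are illustrative, not from the paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One backward pass through a toy 1-D setup: the feature extractor
# computes f = w*x; the discriminator d = sigmoid(a*f + b) is trained
# with binary cross-entropy (source label 1, target label 0).
def domain_grads(w, a, b, x, domain_label):
    f = w * x
    p = sigmoid(a * f + b)
    err = p - domain_label  # dBCE/dlogit for a sigmoid + BCE head
    g_a = err * f           # discriminator weight gradient
    g_b = err               # discriminator bias gradient
    g_w = err * a * x       # gradient reaching the extractor weight
    return g_w, g_a, g_b

# A source sample (label 1) the discriminator currently classifies well:
g_w, g_a, g_b = domain_grads(w=1.0, a=0.5, b=0.0, x=2.0, domain_label=1)

# The discriminator descends on (g_a, g_b) to separate the domains.
# A gradient reversal layer flips the sign of the gradient flowing
# into the extractor, so the extractor ascends the same loss and is
# pushed toward features the discriminator cannot tell apart.
lam = 1.0                   # reversal strength (often scheduled in practice)
g_w_reversed = -lam * g_w
```

The sign flip is the whole trick: one loss, one backward pass, with the discriminator minimizing it and the extractor (through the reversed gradient) maximizing it.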
