Title
Split to Merge: Unifying Separated Modalities for Unsupervised Domain Adaptation
Keywords
Merge (version control)
Domain adaptation
Pattern
Computer science
Adaptation (eye)
Artificial intelligence
Psychology
Information retrieval
Neuroscience
Sociology
Social science
Classifier (UML)
Authors
Xinyao Li, Yuke Li, Zhekai Du, Fengling Li, Ke Lü, Jingjing Li
Source
Journal: Cornell University - arXiv
Date: 2024-03-11
Identifier
DOI: 10.48550/arxiv.2403.06946
Abstract
Large vision-language models (VLMs) like CLIP have demonstrated good zero-shot learning performance in the unsupervised domain adaptation task. Yet, most transfer approaches for VLMs focus on either the language or visual branches, overlooking the nuanced interplay between both modalities. In this work, we introduce a Unified Modality Separation (UniMoS) framework for unsupervised domain adaptation. Leveraging insights from modality gap studies, we craft a nimble modality separation network that distinctly disentangles CLIP's features into language-associated and vision-associated components. Our proposed Modality-Ensemble Training (MET) method fosters the exchange of modality-agnostic information while maintaining modality-specific nuances. We align features across domains using a modality discriminator. Comprehensive evaluations on three benchmarks reveal our approach sets a new state-of-the-art with minimal computational costs. Code: https://github.com/TL-UESTC/UniMoS
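To make the architecture described in the abstract concrete, below is a minimal PyTorch sketch of the two components it names: a lightweight separation network that disentangles a CLIP feature into language-associated and vision-associated parts, and a modality discriminator used for cross-domain alignment. The layer sizes, head structure, and the discriminator's inputs and targets are illustrative assumptions, not the authors' implementation; the actual code is in the repository linked above.

import torch
import torch.nn as nn

class ModalitySeparationNet(nn.Module):
    # Splits one CLIP feature vector into a language-associated and a
    # vision-associated component via two lightweight projection heads.
    # Hidden sizes and head structure here are illustrative guesses.
    def __init__(self, dim: int = 512):
        super().__init__()
        self.lang_head = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.vis_head = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, feat: torch.Tensor):
        # feat: (batch, dim) image features from a frozen CLIP encoder.
        return self.lang_head(feat), self.vis_head(feat)

class ModalityDiscriminator(nn.Module):
    # Small classifier that, per the abstract, aligns features across
    # domains; scoring two classes (e.g. source vs. target) is an
    # assumption about its exact role.
    def __init__(self, dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Toy usage with random stand-ins for CLIP features.
sep, disc = ModalitySeparationNet(), ModalityDiscriminator()
feats = torch.randn(8, 512)
lang_part, vis_part = sep(feats)
domain_logits = disc(torch.cat([lang_part, vis_part], dim=0))
print(lang_part.shape, vis_part.shape, domain_logits.shape)

In an adversarial setup of this kind, the separation heads would be trained to fool the discriminator while a task head consumes the recombined components; the paper's actual training recipe, including Modality-Ensemble Training, is documented in the linked repository.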