Concepts
Computer science
Artificial intelligence
Modality (human–computer interaction)
Segmentation
Complementarity (molecular biology)
Machine learning
Domain adaptation
Exploit
Domain (mathematical analysis)
Pattern recognition (psychology)
Classifier (UML)
Mathematics
Computer security
Genetics
Biology
Mathematical analysis
Authors
Yachao Zhang, Miaoyu Li, Yuan Xie, Cuihua Li, Cong Wang, Zhizhong Zhang, Yanyun Qu
Identifier
DOI: 10.1145/3503161.3547987
Abstract
2D-3D unsupervised domain adaptation (UDA) tackles the lack of annotations in a new domain by capitalizing on the relationship between 2D and 3D data. Existing methods achieve considerable improvements by performing cross-modality alignment in a modality-agnostic way, failing to exploit modality-specific characteristics for modeling complementarity. In this paper, we present self-supervised exclusive learning for cross-modal semantic segmentation under the UDA scenario, which avoids prohibitive annotation costs. Specifically, two self-supervised tasks are designed, named "plane-to-spatial" and "discrete-to-textured". The former helps the 2D network branch improve its perception of spatial metrics, and the latter supplements structured texture information for the 3D network branch. In this way, modality-specific exclusive information can be effectively learned, and the complementarity of the two modalities is strengthened, resulting in a network that is robust across domains. Guided by the supervision from these self-supervised tasks, we introduce a mixed domain, built by mixing patches of source- and target-domain samples, to enhance perception of the target domain. In addition, we propose domain-category adversarial learning with category-wise discriminators, constructing category prototypes to learn domain-invariant features. We evaluate our method on various multi-modality domain adaptation settings, where our results significantly outperform both uni-modal and multi-modal state-of-the-art competitors.
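The mixed-domain construction above amounts to patch-level mixing between source and target images. Below is a minimal PyTorch sketch of that idea; the function name `mix_patches`, the fixed grid, and the per-patch swap probability are illustrative assumptions, not the paper's exact recipe:

```python
import torch

def mix_patches(source_img: torch.Tensor,
                target_img: torch.Tensor,
                patch_size: int = 64,
                mix_ratio: float = 0.5) -> torch.Tensor:
    """Build a mixed-domain sample by randomly replacing grid patches
    of a source image with the co-located patches of a target image.

    source_img, target_img: tensors of shape (C, H, W), same size.
    mix_ratio: probability that a given patch is taken from the target.
    """
    assert source_img.shape == target_img.shape
    mixed = source_img.clone()
    _, h, w = source_img.shape
    for top in range(0, h, patch_size):
        for left in range(0, w, patch_size):
            if torch.rand(()).item() < mix_ratio:
                mixed[:, top:top + patch_size, left:left + patch_size] = \
                    target_img[:, top:top + patch_size, left:left + patch_size]
    return mixed

# Example: mix a 3x256x256 source image with a target image.
src = torch.rand(3, 256, 256)
tgt = torch.rand(3, 256, 256)
mixed = mix_patches(src, tgt)
```

Keeping the patches co-located preserves the spatial layout, so supervision for each patch (labels on source patches, pseudo-labels or self-supervised task signals on target patches) can follow the patch's domain of origin.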
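The domain-category adversarial learning can likewise be sketched: per-category prototypes (the mean feature of each class) are fed to one small binary discriminator per category, which is trained to tell source prototypes from target prototypes while the feature extractor is updated to fool it. The prototype pooling scheme and the two-layer MLP heads below are assumptions for illustration only:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def category_prototypes(feats: torch.Tensor,
                        labels: torch.Tensor,
                        num_classes: int) -> torch.Tensor:
    """Mean feature per category.

    feats:  (N, D) per-pixel or per-point features.
    labels: (N,) hard labels (ground truth on source, pseudo-labels on target).
    Returns (num_classes, D); categories absent from the batch stay zero.
    """
    d = feats.shape[1]
    protos = feats.new_zeros(num_classes, d)
    for k in range(num_classes):
        mask = labels == k
        if mask.any():
            protos[k] = feats[mask].mean(dim=0)
    return protos

class CategoryDiscriminators(nn.Module):
    """One small binary discriminator (source vs. target) per category."""
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
            for _ in range(num_classes)
        )

    def forward(self, protos: torch.Tensor) -> torch.Tensor:
        # protos: (num_classes, feat_dim) -> (num_classes,) logits
        return torch.cat([head(protos[k:k + 1])
                          for k, head in enumerate(self.heads)], dim=0).squeeze(1)

# Discriminator objective sketch: output 1 for source prototypes, 0 for
# target prototypes; the feature extractor would be trained adversarially
# with flipped labels on the target prototypes.
num_classes, feat_dim = 10, 128
disc = CategoryDiscriminators(feat_dim, num_classes)
src_protos = category_prototypes(torch.randn(500, feat_dim),
                                 torch.randint(0, num_classes, (500,)), num_classes)
tgt_protos = category_prototypes(torch.randn(500, feat_dim),
                                 torch.randint(0, num_classes, (500,)), num_classes)
d_loss = (F.binary_cross_entropy_with_logits(disc(src_protos), torch.ones(num_classes)) +
          F.binary_cross_entropy_with_logits(disc(tgt_protos), torch.zeros(num_classes)))
```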