Generalization
Domain (mathematical analysis)
Set (abstract data type)
Mode (computer interface)
Computer science
Distribution (mathematics)
Artificial intelligence
Algorithm
Theoretical computer science
Mathematics
Mathematical analysis
Programming language
Operating system
Authors
Rui Dai, Yonggang Zhang, Zhen Fang, Bo Han, Xinmei Tian
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Citations: 6
Identifier
DOI: 10.48550/arxiv.2304.13976
Abstract
Domain generalization (DG) aims to tackle the distribution shift between training domains and unknown target domains. Generating new domains is one of the most effective approaches, yet its performance gain depends on the distribution discrepancy between the generated and target domains. Distributionally robust optimization is promising to tackle distribution discrepancy by exploring domains in an uncertainty set. However, the uncertainty set may be overwhelmingly large, leading to low-confidence prediction in DG. It is because a large uncertainty set could introduce domains containing semantically different factors from training domains. To address this issue, we propose to perform a moderately distributional exploration (MODE) for domain generalization. Specifically, MODE performs distribution exploration in an uncertainty subset that shares the same semantic factors with the training domains. We show that MODE can endow models with provable generalization performance on unknown target domains. The experimental results show that MODE achieves competitive performance compared to state-of-the-art baselines.
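To make the distributionally robust optimization idea in the abstract concrete, the sketch below shows a generic KL-constrained DRO reweighting step, where the "uncertainty set" is a KL-divergence ball around the empirical training distribution and the worst-case reweighting has a closed form via exponential tilting. This is a standard DRO illustration, not the paper's MODE algorithm (which restricts exploration to a semantic-preserving subset); the function names and the `temperature` parameter are hypothetical.

```python
import math

def kl_dro_weights(losses, temperature=1.0):
    # Exponential tilting: the worst-case sample reweighting inside a
    # KL-divergence ball around the empirical distribution. A smaller
    # temperature corresponds to a larger uncertainty set, pushing more
    # weight onto the hardest (highest-loss) samples.
    m = max(losses)  # subtract the max for numerical stability
    z = [math.exp((l - m) / temperature) for l in losses]
    s = sum(z)
    return [zi / s for zi in z]

def dro_objective(losses, temperature=1.0):
    # Robust training objective: expected loss under the worst-case
    # reweighting rather than the plain empirical average.
    w = kl_dro_weights(losses, temperature)
    return sum(wi * li for wi, li in zip(w, losses))
```

As `temperature` grows the weights approach uniform and the objective approaches the ordinary mean loss; as it shrinks the objective approaches the maximum per-sample loss. MODE's contribution, per the abstract, is to moderate this exploration by constraining it to distributions sharing the training domains' semantic factors.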