Computer science
Adversarial system
Dual (grammatical number)
Domain (mathematical analysis)
Adaptation (eye)
Domain adaptation
Artificial intelligence
Psychology
Art
Neuroscience
Literature
Mathematical analysis
Mathematics
Classifier (UML)
Authors
Hongzu Su, Jingjing Li, Zhekai Du, Lei Zhu, Ke Lü, Hengtao Shen
Source
Journal: ACM Transactions on Information Systems
Date: 2023-11-11
Volume/issue: 42 (3): 1-26
Citations: 2
Abstract
Data scarcity is a perpetual challenge for recommendation systems, and researchers have proposed a variety of cross-domain recommendation methods to alleviate the problem of data scarcity in target domains. However, in many real-world cross-domain recommendation systems, the source domain and the target domain are sampled from different data distributions, which obstructs cross-domain knowledge transfer. In this article, we propose to explicitly align the data distributions of the source domain and the target domain to alleviate imbalanced sample distribution and thus mitigate the data scarcity issue in the target domain. Technically, our proposed approach builds a dual adversarial adaptation (DAA) framework to adversarially train the target model together with a pre-trained source model. Two domain discriminators play the two-player minimax game with the target model and guide the target model to learn reliable domain-invariant features that can be transferred across domains. At the same time, the target model is calibrated to learn domain-specific information of the target domain. In addition, we formulate our approach as a plug-and-play module to boost existing recommendation systems. We apply the proposed method to address the issues of insufficient data and imbalanced sample distribution in real-world Click-through Rate/Conversion Rate predictions on two large-scale industrial datasets. We evaluate the proposed method in scenarios with and without overlapping users/items, and extensive experiments verify that the proposed method is able to significantly improve the prediction performance on the target domain. For instance, our method boosts PLE with a performance improvement of 15.4% in terms of Area Under Curve compared with single-domain PLE on our private game dataset. In addition, our method surpasses single-domain MMoE by 6.85% on the public datasets. Code: https://github.com/TL-UESTC/DAA.
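The sketch below illustrates the general shape of the dual adversarial adaptation idea described in the abstract: a frozen pre-trained source model, a target model, and two domain discriminators trained in a minimax game, while a supervised CTR loss keeps the target model calibrated to target-domain labels. It is a minimal illustration only; the module names, feature dimensions, loss weights, and the exact way the two discriminators are instantiated are assumptions, not the authors' released implementation (see https://github.com/TL-UESTC/DAA for the official code).

```python
# Minimal sketch of a dual adversarial adaptation training step (assumed design,
# not the official DAA implementation).
import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    """Hypothetical encoder mapping raw interaction features to embeddings."""
    def __init__(self, in_dim=64, hid_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim))
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Domain discriminator: predicts whether a feature comes from source or target."""
    def __init__(self, hid_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, 1))
    def forward(self, f):
        return self.net(f)

# Pre-trained source model (kept frozen) and target model to be adapted.
source_model = FeatureEncoder()
for p in source_model.parameters():
    p.requires_grad_(False)
target_model = FeatureEncoder()

# Two discriminators, one per adversarial game (the "dual" part), plus a CTR head.
disc_a, disc_b = Discriminator(), Discriminator()
ctr_head = nn.Linear(32, 1)

bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(list(disc_a.parameters()) + list(disc_b.parameters()), lr=1e-3)
opt_g = torch.optim.Adam(list(target_model.parameters()) + list(ctr_head.parameters()), lr=1e-3)

def train_step(x_src, x_tgt, y_tgt):
    """One minimax step: discriminators learn to tell the domains apart, then the
    target model learns to fool them while fitting target-domain labels."""
    f_src = source_model(x_src).detach()
    f_tgt = target_model(x_tgt)

    # 1) Update discriminators: source features -> 1, target features -> 0.
    d_loss = (bce(disc_a(f_src), torch.ones(len(x_src), 1)) +
              bce(disc_a(f_tgt.detach()), torch.zeros(len(x_tgt), 1)) +
              bce(disc_b(f_src), torch.ones(len(x_src), 1)) +
              bce(disc_b(f_tgt.detach()), torch.zeros(len(x_tgt), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Update target model: fool both discriminators (domain-invariant features)
    #    while minimizing the supervised CTR loss (domain-specific calibration).
    f_tgt = target_model(x_tgt)
    adv_loss = (bce(disc_a(f_tgt), torch.ones(len(x_tgt), 1)) +
                bce(disc_b(f_tgt), torch.ones(len(x_tgt), 1)))
    task_loss = bce(ctr_head(f_tgt), y_tgt)
    g_loss = task_loss + 0.1 * adv_loss  # 0.1 is an assumed trade-off weight
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy usage with random tensors standing in for source/target mini-batches.
x_src, x_tgt = torch.randn(8, 64), torch.randn(8, 64)
y_tgt = torch.randint(0, 2, (8, 1)).float()
print(train_step(x_src, x_tgt, y_tgt))
```

Because the adversarial and supervised losses are combined only in the target model's update, this pattern can in principle be bolted onto an existing single-domain recommender, which is in the spirit of the plug-and-play framing in the abstract.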