Computer science
Task (project management)
Artificial intelligence
Multi-task learning
Machine learning
Scalability
Management
Database
Economics
Authors
Haotian Wu, Bowen Xing, Ivor W. Tsang
Identifier
DOI:10.1145/3583780.3615271
Abstract
Multi-task learning (MTL) is a widely adopted machine learning paradigm in recommender systems. However, existing MTL models often suffer from performance degradation caused by negative transfer and the seesaw phenomenon. Some works attempt to alleviate these issues by separating task-specific and shared experts, thereby mitigating harmful interference between task-specific and shared knowledge. Despite the success of these efforts, task-specific and shared knowledge have still not been thoroughly decoupled: unnecessary mixture between them may remain and harm MTL models' performance. To tackle this problem, in this paper we propose the multi-task knowledge disentanglement network (MTKDN), which further reduces harmful interference between shared and task-specific knowledge. Specifically, we propose a novel contrastive disentanglement mechanism that explicitly decouples shared and task-specific knowledge in their corresponding hidden spaces, reducing the unnecessary mixture between the two. As for optimization objectives, we propose individual objectives for the shared and task-specific experts, encouraging each kind of expert to focus on extracting shared and task-specific knowledge, respectively. Additionally, we propose a margin regularization to ensure that the fusion of shared and task-specific knowledge outperforms exploiting either of them alone. We conduct extensive experiments on open-source large-scale recommendation datasets. The experimental results demonstrate that MTKDN significantly outperforms state-of-the-art MTL models, and ablation experiments further verify the necessity of the proposed contrastive disentanglement mechanism and the novel loss settings.
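The abstract names three trainable components: a contrastive disentanglement term that pushes shared and task-specific representations apart, individual objectives for the two kinds of experts, and a margin regularization requiring the fused prediction to beat either single-source prediction. The PyTorch sketch below illustrates one plausible shape of the first and third losses; it is a minimal reading of the abstract, not the paper's actual formulation, and every name (shared_repr, task_reprs, margin) as well as the similarity-minimization form of the contrastive term is an assumption.

```python
import torch
import torch.nn.functional as F

def contrastive_disentanglement_loss(shared_repr, task_reprs):
    """Push each task-specific representation away from the shared one,
    so the two kinds of knowledge occupy distinct hidden spaces.

    shared_repr: (batch, dim) tensor from the shared experts (hypothetical).
    task_reprs:  list of (batch, dim) tensors, one per task-specific expert.
    """
    shared = F.normalize(shared_repr, dim=-1)
    loss = shared_repr.new_zeros(())
    for task_repr in task_reprs:
        task = F.normalize(task_repr, dim=-1)
        # Minimizing the paired cosine similarity decouples shared and
        # task-specific vectors; a simple stand-in for the contrastive term.
        loss = loss + F.cosine_similarity(shared, task, dim=-1).mean()
    return loss / len(task_reprs)

def margin_regularization(fused_loss, shared_only_loss, task_only_loss,
                          margin=0.05):
    """Hinge penalty: ask the fused prediction's loss to undercut the
    better of the two single-source losses by at least `margin`
    (all arguments assumed to be scalar task losses)."""
    best_single = torch.minimum(shared_only_loss, task_only_loss)
    return F.relu(fused_loss - best_single + margin)

# Usage sketch: add both terms to the per-expert task objectives, e.g.
# total = task_loss + a * contrastive_disentanglement_loss(s, ts) \
#       + b * margin_regularization(lf, ls, lt)
```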