Multitask learning
Feature (machine learning)
Artificial intelligence
Computer science
Machine learning
Task (project management)
Engineering
Philosophy
Linguistics
Systems engineering
Authors
Yuxin Tian, Yijie Lin, Qing Ye, Jian Wang, Xi Peng, Jiancheng Lv
Identifiers
DOI:10.1109/tsmc.2024.3389672
Abstract
Existing multitask dense prediction methods typically rely on either a globally shared neural architecture or a cross-task fusion strategy. However, these approaches tend to overlook either potential cross-task complementary information or consistent information, resulting in suboptimal results. Motivated by this observation, we propose a novel plug-and-play module that concurrently leverages cross-task consistent and complementary information, thereby capturing sufficient features. Specifically, for a given pair of tasks, we compute a cross-task similarity matrix that extracts cross-task consistent features bidirectionally. To integrate the complementary signals from different tasks, we fuse the cross-task consistent features with the corresponding task-specific features using a $1\times 1$ convolution. Extensive experimental results demonstrate the remarkable performance gain of our method on two challenging datasets w.r.t. different task sets, compared with seven approaches. Under the two-task setting, our method achieves 1.63% and 8.32% improvements on NYUD-v2 and PASCAL-Context, respectively. Under the three-task setting, we obtain an additional 7.7% multitask performance gain.
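The abstract outlines the module's mechanics: a cross-task similarity matrix extracts consistent features bidirectionally between a pair of tasks, and a $1\times 1$ convolution fuses them with the task-specific features. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of one plausible reading, with flattened spatial positions, a scaled dot-product similarity, and a per-position linear map standing in for the $1\times 1$ convolution; all function and variable names (`cross_task_module`, `f_a`, `w_a`, etc.) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_task_module(f_a, f_b, w_a, w_b):
    """Hypothetical sketch of the cross-task module described in the abstract.

    f_a, f_b: task-specific feature maps of shape (N, C), where N is the
              number of flattened spatial positions and C the channel count.
    w_a, w_b: (2C, C) weight matrices standing in for the 1x1 convolutions
              that fuse consistent and task-specific features.
    """
    n, c = f_a.shape
    # Cross-task similarity matrix (N, N): affinity between positions of A and B.
    sim = f_a @ f_b.T / np.sqrt(c)
    # Bidirectional extraction of cross-task consistent features.
    consistent_a = softmax(sim, axis=1) @ f_b      # consistent features for task A
    consistent_b = softmax(sim.T, axis=1) @ f_a    # consistent features for task B
    # A 1x1 convolution over concatenated channels is a per-position linear map,
    # fusing consistent features with the complementary task-specific ones.
    out_a = np.concatenate([f_a, consistent_a], axis=1) @ w_a
    out_b = np.concatenate([f_b, consistent_b], axis=1) @ w_b
    return out_a, out_b
```

Since the module maps each task's features back to their original channel width, it can in principle be dropped between any pair of task decoders, consistent with the plug-and-play claim.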