Computer science
Multitask learning
Artificial intelligence
Task (project management)
Machine learning
Context (archaeology)
Adversarial system
Artificial neural network
Deep learning
Biology
Paleontology
Economics
Management
Authors
Alexander Mattick,Martin Mayr,Andreas Maier,Vincent Christlein
Identifier
DOI:10.1007/978-3-031-06555-2_45
Abstract
Multitask learning has been a common technique for improving representations learned by artificial neural networks for decades. However, its actual effects and trade-offs are not well explored, especially in the context of document analysis. We demonstrate a simple and realistic scenario on real-world datasets that yields noticeably worse results in a multitask learning setting than in a single-task setting. We hypothesize that slight data-manifold and task semantic shifts are sufficient to lead to adversarial competition of tasks inside networks, and we demonstrate this experimentally in two different multitask learning formulations.
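For readers unfamiliar with the setting, the sketch below illustrates a generic hard-parameter-sharing multitask setup in PyTorch: a shared trunk feeds two task-specific heads and both task losses update the trunk, which is where the competition between tasks described in the abstract can arise. This is a hypothetical, minimal example (all names, dimensions, and the equal-weight loss sum are assumptions), not the architecture or training procedure used by the authors.

```python
import torch
import torch.nn as nn

# Minimal hard-parameter-sharing multitask model: a shared trunk
# feeds two task-specific heads; both task losses update the trunk,
# so the tasks can cooperate or compete for its capacity.
class SharedTrunkMTL(nn.Module):
    def __init__(self, in_dim=64, hidden=128, n_classes_a=10, n_classes_b=5):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head_a = nn.Linear(hidden, n_classes_a)  # head for task A
        self.head_b = nn.Linear(hidden, n_classes_b)  # head for task B

    def forward(self, x):
        z = self.trunk(x)  # shared representation used by both tasks
        return self.head_a(z), self.head_b(z)

model = SharedTrunkMTL()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for two tasks drawn from (slightly shifted) data.
x = torch.randn(32, 64)
y_a = torch.randint(0, 10, (32,))
y_b = torch.randint(0, 5, (32,))

logits_a, logits_b = model(x)
# Equal-weight sum of task losses; gradients from both tasks flow into
# the shared trunk and may pull its parameters in conflicting directions.
loss = loss_fn(logits_a, y_a) + loss_fn(logits_b, y_b)
opt.zero_grad()
loss.backward()
opt.step()
```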