Keywords
Computer Science, Reinforcement Learning, Task (Project Management), Transfer of Learning, Artificial Intelligence, Granularity, Negative Transfer, Knowledge Transfer, A Priori and A Posteriori, Multi-Task Learning, Machine Learning, Knowledge Management, Linguistics, Epistemology, Operating Systems, Philosophy, Economics, First Language, Management
Authors
Timo Bräm, Gino Brunner, Oliver Richter, Roger Wattenhofer
Identifier
DOI: 10.1007/978-3-030-46133-1_9
Abstract
Sharing knowledge between tasks is vital for efficient learning in a multi-task setting. However, most research so far has focused on the easier case where knowledge transfer is not harmful, i.e., where knowledge from one task cannot negatively impact performance on another task. In contrast, we present an approach to multi-task deep reinforcement learning based on attention that does not require any a priori assumptions about the relationships between tasks. Our attention network automatically groups task knowledge into sub-networks at a state-level granularity. It thereby achieves positive knowledge transfer where possible, and avoids negative transfer in cases where tasks interfere. We test our algorithm against two state-of-the-art multi-task/transfer learning approaches and show comparable or superior performance while requiring fewer network parameters.
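The abstract describes an attention network that softly assigns each state to a set of sub-networks, so that tasks share sub-networks when transfer helps and use disjoint ones when they interfere. Below is a minimal PyTorch-style sketch of that idea, not the authors' implementation: the class name AttentiveMixture and parameters such as num_subnets are hypothetical, and the sub-network architecture is an assumption for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveMixture(nn.Module):
    """Mixes the outputs of K sub-networks with state-dependent attention.

    Hypothetical sketch: an attention head reads the state and produces a
    soft assignment over sub-networks, so states from different tasks can
    share a sub-network (positive transfer) or occupy different ones
    (avoiding negative transfer).
    """

    def __init__(self, state_dim: int, action_dim: int,
                 num_subnets: int = 4, hidden: int = 64):
        super().__init__()
        # K independent sub-networks, each a small head over the state.
        self.subnets = nn.ModuleList(
            nn.Sequential(
                nn.Linear(state_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, action_dim),
            )
            for _ in range(num_subnets)
        )
        # Attention head: maps the state to one logit per sub-network.
        self.attention = nn.Linear(state_dim, num_subnets)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Soft weights over sub-networks, computed per state
        # (state-level granularity).
        weights = F.softmax(self.attention(state), dim=-1)      # (B, K)
        outputs = torch.stack(
            [net(state) for net in self.subnets], dim=1)        # (B, K, A)
        # Weighted combination of sub-network outputs.
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)     # (B, A)


# Usage: because the weights depend on the state, gradients flow mainly
# into the sub-networks a state actually uses, limiting interference.
model = AttentiveMixture(state_dim=8, action_dim=4)
q_values = model(torch.randn(32, 8))  # batch of 32 states
```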