Keywords
Computer science, Transfer of learning, Transitive relation, Inference, Artificial intelligence, Knowledge transfer, Component (thermodynamics), Negative transfer, Domain (mathematical analysis), Process (computing), Selection (genetic algorithm), Machine learning, Bridge (graph theory), String (physics), Mathematics, Programming language, Physics, Internal medicine, Mathematical analysis, Philosophy, Thermodynamics, Combinatorics, Medicine, First language, Knowledge management, Linguistics, Mathematical physics
Authors
Ben Tan,Yangqiu Song,Erheng Zhong,Qiang Yang
Identifier
DOI: 10.1145/2783258.2783295
Abstract
Transfer learning, which leverages knowledge from source domains to enhance learning ability in a target domain, has been proven effective in various applications. One major limitation of transfer learning is that the source and target domains must be directly related: if there is little overlap between the two domains, knowledge transfer between them will not be effective. Inspired by human transitive inference and learning ability, whereby two seemingly unrelated concepts can be connected by a string of intermediate bridges built from auxiliary concepts, in this paper we study a novel learning problem: Transitive Transfer Learning (abbreviated to TTL). TTL aims to break down large domain distances and transfer knowledge even when the source and target domains directly share few factors. For example, when the source and target domains are documents and images respectively, TTL could use some annotated images as the intermediate domain to bridge them. To solve the TTL problem, we propose a learning framework that mimics the human learning process. The framework is composed of an intermediate domain selection component and a knowledge transfer component. Extensive empirical evidence shows that the framework yields state-of-the-art classification accuracies on several classification data sets.
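To make the two-component framework concrete, the following minimal Python sketch mirrors the structure described in the abstract: a crude intermediate-domain selection step, followed by a two-hop transfer from source to intermediate to target. It is not the authors' algorithm; the synthetic data, the mean-distance proxy used for selection, the pseudo-labeling transfer step, and the 0.8 confidence threshold are all illustrative assumptions introduced here.

# Hypothetical sketch of the transitive transfer idea, not the paper's method:
# (1) pick an intermediate domain that sits "between" source and target,
# (2) transfer knowledge in two hops via pseudo-labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_domain(shift, n=300):
    """Synthetic binary-classification domain whose features drift by `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X[:, 0] - X[:, 1] > 0).astype(int)   # label rule shared across domains
    return X, y

# Source and target are far apart; candidate intermediates lie in between.
Xs, ys = make_domain(shift=0.0)               # labeled source domain
Xt, yt = make_domain(shift=3.0)               # target domain (labels used only for evaluation)
candidates = {name: make_domain(s)[0]         # unlabeled intermediate candidates
              for name, s in [("near_source", 0.5), ("midway", 1.5), ("near_target", 2.8)]}

def domain_gap(Xa, Xb):
    """Crude proxy for distribution distance: gap between feature means."""
    return np.linalg.norm(Xa.mean(axis=0) - Xb.mean(axis=0))

# Component 1: intermediate domain selection.
# Choose the candidate with the smallest combined gap to source and target.
best = min(candidates, key=lambda k: domain_gap(Xs, candidates[k]) + domain_gap(candidates[k], Xt))
Xi = candidates[best]
print("selected intermediate domain:", best)

# Component 2: two-hop knowledge transfer.
# Hop 1: the source model pseudo-labels the (closer) intermediate domain.
src_clf = LogisticRegression().fit(Xs, ys)
proba = src_clf.predict_proba(Xi)
confident = proba.max(axis=1) > 0.8           # keep only confident pseudo-labels
yi_pseudo = proba.argmax(axis=1)[confident]

# Hop 2: a model retrained on the pseudo-labeled intermediate domain
# carries the knowledge the rest of the way to the target domain.
mid_clf = LogisticRegression().fit(Xi[confident], yi_pseudo)
print("direct source -> target accuracy:", src_clf.score(Xt, yt))
print("transitive source -> intermediate -> target accuracy:", mid_clf.score(Xt, yt))

The sketch only shows the shape of the pipeline; the paper's actual selection criterion and transfer component differ, and any accuracy gains reported there should not be expected from this toy setup.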