Federated learning
Computer science
Transfer learning
Space character (punctuation)
Face (sociological concept)
Artificial intelligence
Sample (materials)
Feature (linguistics)
Shared space
Transfer (computing)
Linguistics
Chromatography
Operating system
Philosophy
Sociology
Parallel computing
Chemistry
Social science
Authors
Qiang Yang, Yang Liu, Yong Cheng, Yan Kang, Tianjian Chen, Han Yu
Source
Journal: Synthesis Lectures on Artificial Intelligence and Machine Learning
[Morgan & Claypool]
Date: 2020-01-01
Volume/Issue: 83-93
Citations: 7
Identifiers
DOI:10.1007/978-3-031-01585-4_6
Abstract
We have discussed horizontal federated learning (HFL) and vertical federated learning (VFL) in Chapters 4 and 5, respectively. HFL requires that all participating parties share the same feature space, while VFL requires that the parties share the same sample space. In practice, however, we often face situations in which the participating parties do not share enough features or samples. In those cases, one can still build a federated learning model by combining federated learning with transfer learning, which transfers knowledge among the parties to achieve better performance. We refer to this combination of federated learning and transfer learning as federated transfer learning (FTL). In this chapter, we provide a formal definition of FTL and discuss how it differs from traditional transfer learning. We then introduce the secure FTL framework proposed in Liu et al. [2019], and conclude the chapter with a summary of challenges and open issues.
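To illustrate the distinction the abstract draws between HFL, VFL, and FTL, the following sketch (not from the chapter; all function and variable names are hypothetical) classifies a two-party setting by comparing the parties' feature spaces and sample-ID spaces. It is a minimal conceptual aid, assuming that identical feature sets indicate HFL, identical sample-ID sets indicate VFL, and limited overlap in both indicates FTL:

```python
def federated_setting(features_a, features_b, ids_a, ids_b):
    """Classify a two-party federation scenario from its overlap structure.

    HFL: identical feature space, (largely) disjoint samples.
    VFL: identical sample space, (largely) disjoint features.
    FTL: only partial overlap in both -> transfer learning is needed.
    """
    feats_a, feats_b = set(features_a), set(features_b)
    samp_a, samp_b = set(ids_a), set(ids_b)
    if feats_a == feats_b:
        return "HFL"  # horizontal: same features, different users
    if samp_a == samp_b:
        return "VFL"  # vertical: same users, different features
    return "FTL"  # small overlap in features and samples alike

# Two parties with largely disjoint users AND largely disjoint features:
setting = federated_setting(
    features_a=["age", "income"], features_b=["income", "clicks"],
    ids_a=["u1", "u2", "u3"], ids_b=["u3", "u4"],
)
print(setting)  # -> FTL
```

The third branch is exactly the regime the chapter addresses: neither the feature spaces nor the sample spaces align, so knowledge must be transferred across the small overlapping region.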