Keywords
Computer science, Forgetting, Task (project management), Artificial intelligence, Domain (mathematical analysis), Segmentation, Diversity (cybernetics), Machine learning, Domain adaptation, Adaptation (eye), Class (philosophy), Domain knowledge, Natural language processing, Mathematical analysis, Physics, Mathematics, Management, Classifier (UML), Optics, Economics, Philosophy, Linguistics
Authors
Marco Toldo, Umberto Michieli, Pietro Zanuttigh
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Citations: 3
Identifier
DOI: 10.48550/arxiv.2210.07016
Abstract
Deep learning models dealing with image understanding in real-world settings must be able to adapt to a wide variety of tasks across different domains. Domain adaptation and class incremental learning deal with domain and task variability separately, whereas their unified solution is still an open problem. We tackle both facets of the problem together, taking into account the semantic shift within both input and label spaces. We start by formally introducing continual learning under task and domain shift. Then, we address the proposed setup by using style transfer techniques to extend knowledge across domains when learning incremental tasks and a robust distillation framework to effectively recollect task knowledge under incremental domain shift. The devised framework (LwS, Learning with Style) is able to generalize incrementally acquired task knowledge across all the domains encountered, proving to be robust against catastrophic forgetting. Extensive experimental evaluation on multiple autonomous driving datasets shows how the proposed method outperforms existing approaches, which prove to be ill-equipped to deal with continual semantic segmentation under both task and domain shift.
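The abstract mentions a robust distillation framework for recollecting task knowledge under incremental domain shift. As a minimal illustration only (the exact LwS objective is defined in the paper), a generic per-pixel knowledge-distillation term — temperature-scaled KL divergence between a frozen old model's predictions and the current model's predictions — can be sketched in NumPy; the function names and shapes here are assumptions for the sketch, not the authors' implementation:

```python
import numpy as np

def softmax(logits, T=1.0, axis=-1):
    # Temperature-scaled softmax over the class axis,
    # with max-subtraction for numerical stability.
    z = logits / T
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(old_logits, new_logits, T=2.0):
    """Per-pixel KL(teacher || student), averaged over all pixels.

    old_logits, new_logits: arrays of shape (H, W, C), the logits of the
    frozen old model (teacher) and the current model (student).
    Generic distillation term for illustration, not the exact LwS loss.
    """
    p = softmax(old_logits, T)                    # teacher distribution
    log_q = np.log(softmax(new_logits, T))        # student log-distribution
    kl = (p * (np.log(p) - log_q)).sum(axis=-1)   # KL per pixel
    return float(kl.mean() * T * T)               # conventional T^2 scaling

# Identical teacher and student logits give (near-)zero loss.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4, 5))
print(distillation_loss(logits, logits))
```

In a continual-learning loop this term would be added to the usual segmentation loss on the new task, penalizing the student for drifting from the teacher's predictions and thereby mitigating catastrophic forgetting.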