Computer science
Classifier (UML)
Artificial intelligence
Machine learning
Inference
Feature learning
Domain adaptation
Invariant (physics)
Pattern recognition (psychology)
Data mining
Mathematics
Mathematical physics
Authors
Yu Liu, Duantengchuan Li, Jian Wang, Bing Li, Bo Hang
Identifier
DOI:10.1016/j.ipm.2023.103638
Abstract
Unsupervised Time Series Domain Adaptation (UTSDA) is the task of transferring information from a labeled source domain to an unlabeled target domain. The majority of existing UTSDA approaches focus on learning a domain-invariant feature space by reducing the gap between domains. However, these single-task representation learning methods have limited expressive capability and ignore the distinct season-related and trend-related domain-invariant mechanisms across different domains. To address this, we introduce a novel approach, distinct from existing methods, through a theoretical analysis of UTSDA from the perspective of causal inference. This analysis establishes a solid theoretical foundation for identifying and modeling such consistent domain-invariant mechanisms, which is a significant advancement in the field. As a solution, we introduce MDLR, a multi-task disentangled learning framework designed for UTSDA. MDLR utilizes a dual-tower architecture with a trend feature extractor (TFE) and a season feature extractor (SFE) to extract trend-related and season-related information. This approach ensures that domain-invariant features at different scales are better represented. Additionally, MDLR is designed with two tasks, a label classifier and a domain classifier, enabling iterative training of the entire model. Experiments on three datasets (UCIHAR, WISDM, and HHAR_SA), along with visualization results, demonstrate the effectiveness of the proposed approach. The source code for our MDLR model is publicly available at https://github.com/MoranCoder95/MDLR/.
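To make the dual-tower, two-task layout described in the abstract concrete, the sketch below pairs a trend feature extractor (TFE) and a season feature extractor (SFE) with a label classifier and a domain classifier. This is a minimal illustration, not the authors' implementation: the 1-D convolutional encoders, the layer sizes, and the gradient-reversal trick used to train the domain classifier adversarially are assumptions made here for clarity; the actual code is at https://github.com/MoranCoder95/MDLR/.

```python
# Minimal sketch of a dual-tower, multi-task model in the spirit of MDLR (PyTorch).
# Module choices (1-D convolutions, layer sizes, gradient reversal) are illustrative
# assumptions, not details taken from the paper.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reversed, scaled gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


def conv_tower(in_channels, feat_dim):
    """A small 1-D conv encoder standing in for the trend/season feature extractors."""
    return nn.Sequential(
        nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
        nn.ReLU(),
        nn.Conv1d(64, feat_dim, kernel_size=5, padding=2),
        nn.AdaptiveAvgPool1d(1),
        nn.Flatten(),
    )


class MDLRSketch(nn.Module):
    def __init__(self, in_channels, feat_dim, num_classes):
        super().__init__()
        self.tfe = conv_tower(in_channels, feat_dim)  # trend feature extractor (TFE)
        self.sfe = conv_tower(in_channels, feat_dim)  # season feature extractor (SFE)
        self.label_clf = nn.Linear(2 * feat_dim, num_classes)  # task 1: label classifier
        self.domain_clf = nn.Linear(2 * feat_dim, 2)           # task 2: domain classifier

    def forward(self, x, lam=1.0):
        # x: (batch, channels, time); the trend/season decomposition itself is omitted.
        z = torch.cat([self.tfe(x), self.sfe(x)], dim=1)
        y_logits = self.label_clf(z)                          # class prediction
        d_logits = self.domain_clf(GradReverse.apply(z, lam))  # source-vs-target prediction
        return y_logits, d_logits
```

In such a setup, the label classifier would be trained with cross-entropy on labeled source samples while the domain classifier, fed through the gradient-reversed features, would push the two towers toward domain-invariant trend and season representations; how MDLR actually alternates these two tasks is specified in the paper and repository rather than here.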