Domain adaptation
Center (category theory)
Class (philosophy)
Computer science
Domain (mathematical analysis)
Fault (geology)
Adaptation (eye)
Artificial intelligence
Control theory (sociology)
Topology (electrical circuits)
Algorithm
Mathematics
Mathematical analysis
Geology
Physics
Combinatorics
Seismology
Chemistry
Control (management)
Classifier (UML)
Optics
Crystallography
Authors
Zhiwu Shang, Changchao Wu, Fei Liu, Cailu Pan, Hongchuan Cheng
Identifier
DOI: 10.1088/1361-6501/ad6c74
Abstract
Most current domain adaptation research focuses on single-source or multi-source transfer between different working conditions of the same machine. For cross-machine tasks with significant domain discrepancy, forcing direct feature alignment between source-domain and target-domain samples may cause negative transfer and degrade the model's diagnostic performance. To overcome these limitations, this paper proposes a multi-source deep transfer model based on center-weighted optimal transport and class-level alignment domain adaptation (CWOT-CLADA). First, to enhance the representation of deep features, a multi-structure feature representation network is constructed that enriches the information capacity embedded in deep features and thereby improves domain adaptation ability. Second, the local maximum mean discrepancy is introduced to fully mine fine-grained information and discriminative features across the source domains and to minimize the distribution discrepancy between them, capturing reliable, generalized multi-source domain-invariant features. On this basis, a center-weighted multi-source optimal transport strategy is designed that jointly accounts for intra-domain uncertainty and inter-domain sample correlation in the transport cost; this establishes more effective transport between the source and target domains, alleviates negative transfer, and improves the model's cross-machine diagnostic performance. Finally, case studies on multiple cross-machine transfer diagnosis tasks show that the proposed method outperforms existing domain adaptation methods in diagnostic accuracy and fault transfer ability.
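The abstract does not specify the paper's center-weighted transport formulation, but such strategies are typically built on entropy-regularized optimal transport solved by Sinkhorn iterations between source- and target-domain features. The following is a minimal sketch under that assumption, with uniform marginals and a squared-Euclidean cost; the function name `sinkhorn` and all parameters are illustrative, not taken from the paper.

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iters=200):
    """Entropy-regularized optimal transport with uniform marginals.

    cost: (n, m) pairwise transport-cost matrix, e.g. squared distances
    between n source-domain and m target-domain feature vectors.
    Returns the (n, m) transport plan (couplings sum to 1).
    """
    n, m = cost.shape
    a = np.full(n, 1.0 / n)   # uniform source marginal
    b = np.full(m, 1.0 / m)   # uniform target marginal
    K = np.exp(-cost / reg)   # Gibbs kernel of the regularized problem
    u = np.ones(n)
    for _ in range(n_iters):  # alternating marginal scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy example: 4 source and 5 target samples with 3-D features.
rng = np.random.default_rng(0)
xs = rng.normal(size=(4, 3))
xt = rng.normal(size=(5, 3))
C = ((xs[:, None, :] - xt[None, :, :]) ** 2).sum(axis=-1)
P = sinkhorn(C)
```

A center-weighted variant would modify the cost matrix `C` (e.g. reweighting entries by each sample's distance to class centers) before solving, which the sketch above accommodates without changing the solver.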