Keywords
Probabilistic logic
Computer science
Heuristic
Artificial intelligence
Machine learning
Embedding
Feature (linguistics)
Set (abstract data type)
Data mining
Linguistics
Philosophy
Programming language
Authors
Xiuwen Gong, Dong Yuan, Wei Bao, Fulin Luo
Identifier
DOI: 10.1109/TPAMI.2022.3228755
Abstract
Partially labeled data learning (PLDL), which includes partial label learning (PLL) and partial multi-label learning (PML), is widely used in modern data science. Researchers typically construct separate, task-specific models for the PLL and PML classification scenarios. The main challenge in training classifiers for PLL and PML is handling the ambiguity caused by noisy false-positive labels in the candidate label set. The state-of-the-art strategy for both scenarios is to perform disambiguation by identifying the ground-truth label(s) directly from the candidate label set, and such approaches fall into two categories: 'the identifying method' and 'the embedding method'. However, both kinds of methods rely on hand-designed heuristic modeling of considerations such as feature/label correlations, with no theoretical interpretation. Instead of adopting heuristic or task-specific modeling, we propose a novel unifying framework, A Unifying Probabilistic Framework for Partially Labeled Data Learning (UPF-PLDL), which is derived from a clear probabilistic formulation and brings existing research on PLL and PML under one information-theoretic interpretation. Furthermore, UPF-PLDL unifies 'the identifying method' and 'the embedding method' into one integrated framework that naturally incorporates feature and label correlations. Comprehensive experiments on synthetic and real-world datasets for both PLL and PML scenarios clearly demonstrate the superiority of the derived framework.