Artificial intelligence
Computer science
Inference
Multi-label classification
Ambiguity
Class (philosophy)
Machine learning
Pattern recognition (psychology)
Distribution (mathematics)
Mathematics
Mathematical analysis
Programming language
Authors
Ning Xu, Jun Shu, Renyi Zheng, Xin Geng, Deyu Meng, Min-Ling Zhang
Identifier
DOI: 10.1109/tpami.2022.3203678
Abstract
Multi-label learning focuses on the ambiguity at the label side, i.e., one instance is associated with multiple class labels, where logical labels are typically adopted to partition the class labels rigidly into relevant and irrelevant ones. However, in real-world tasks the relevance or irrelevance of each label to a given instance is essentially relative, and the label distribution, which annotates an instance with a description degree for every class label, is more fine-grained than logical labels. Since label distributions are not explicitly available in most training sets, a process named label enhancement has emerged to recover them from the training data. By inducing a generative model of the label distribution and adopting the variational inference technique, the approximate posterior density of the label distributions is obtained by maximizing the variational lower bound. Following this consideration, LEVI is proposed to recover label distributions from the training examples. In addition, a multi-label predictive model is induced by leveraging the recovered label distributions together with a specialized objective function. Recovery experiments on fourteen label distribution datasets and predictive experiments on fourteen multi-label learning datasets validate the advantage of our approach over state-of-the-art approaches.
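The variational recipe in the abstract can be illustrated with a minimal sketch. The code below is not the authors' LEVI model; it is a generic VAE-style construction under assumed choices: a hypothetical `LabelEnhancer` encoder q(z | x, l) over features x and logical labels l, a decoder that softmax-normalizes latent codes into a label distribution d, a Gaussian prior, and a cross-entropy-style reconstruction term. Minimizing the negative ELBO here maximizes a variational lower bound, which is the general mechanism the abstract describes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelEnhancer(nn.Module):
    """Hypothetical encoder-decoder for recovering label distributions
    from logical labels; a sketch of the variational idea, not LEVI itself."""
    def __init__(self, n_features, n_labels, n_latent=16):
        super().__init__()
        # q(z | x, l): encoder over features and logical labels, outputs (mu, logvar)
        self.encoder = nn.Linear(n_features + n_labels, 2 * n_latent)
        # decoder mapping a latent code to label-distribution logits
        self.decoder = nn.Linear(n_latent, n_labels)

    def forward(self, x, logical):
        h = self.encoder(torch.cat([x, logical], dim=-1))
        mu, logvar = h.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        d = F.softmax(self.decoder(z), dim=-1)  # recovered distribution sums to 1
        return d, mu, logvar

def negative_elbo(d, logical, mu, logvar):
    # Reconstruction term: recovered distribution should cover the relevant labels.
    recon = -(logical * torch.log(d + 1e-8)).sum(dim=-1).mean()
    # KL term: keep q(z | x, l) close to the standard normal prior.
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
    return recon + kl  # minimizing this maximizes the variational lower bound

# Toy usage: 32 instances, 20 features, 5 class labels.
x = torch.randn(32, 20)
logical = (torch.rand(32, 5) > 0.5).float()
model = LabelEnhancer(n_features=20, n_labels=5)
d, mu, logvar = model(x, logical)
negative_elbo(d, logical, mu, logvar).backward()
```

After training, the decoded d would serve as the fine-grained supervision signal for a downstream multi-label predictor, in place of the rigid 0/1 logical labels.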