Multi-label classification
Fusion
Artificial intelligence
Computer science
Pattern recognition (psychology)
Philosophy
Linguistics
Authors
Gengyu Lyu, Zhen Yang, Xiang Deng, Songhe Feng
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Pages: 1-15
Identifier
DOI: 10.1109/tnnls.2024.3390776
Abstract
In the task of multiview multilabel (MVML) classification, each instance is represented by several heterogeneous features and associated with multiple semantic labels. Existing MVML methods mainly focus on leveraging a shared subspace to comprehensively explore multiview consensus information across different views, while it remains an open problem whether such a shared subspace representation is effective at characterizing all relevant labels when formulating a desired MVML model. In this article, we propose a novel label-driven view-specific fusion MVML method named L-VSM, which bypasses seeking a shared subspace representation and instead directly encodes the feature representation of each individual view to contribute to the final multilabel classifier induction. Specifically, we first design a label-driven feature graph construction strategy and construct all instances under their various feature representations into the corresponding feature graphs. Then, these view-specific feature graphs are integrated into a unified graph by linking the different feature representations within each instance. Afterward, we adopt a graph attention mechanism to aggregate and update all feature nodes on the unified graph to generate structural representations for each instance, where both intra-view correlations and inter-view alignments are jointly encoded to discover the underlying consensuses and complementarities across different views. Moreover, to explore the widespread label correlations in multilabel learning (MLL), the transformer architecture is introduced to construct a dynamic semantic-aware label graph and accordingly generate structural semantic representations for each specific class. Finally, we derive an instance-label affinity score for each instance by averaging the affinity scores of its different feature representations with the multilabel soft margin loss.
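The graph-attention aggregation step described above can be sketched as follows. This is a minimal single-head illustration in NumPy, not the authors' implementation: the function name, parameter shapes, and LeakyReLU slope are our assumptions for exposition.

```python
import numpy as np

def graph_attention_layer(H, A, W, a):
    """Single-head graph attention over a unified feature graph (sketch).

    H: (N, d_in) node features; A: (N, N) binary adjacency (1 = edge,
    including self-loops); W: (d_in, d_out) shared projection;
    a: (2 * d_out,) attention vector. All names are illustrative.
    """
    Z = H @ W                                         # project node features
    N = Z.shape[0]
    # Pairwise attention logits e_ij = LeakyReLU(a^T [z_i || z_j])
    e = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            s = a @ np.concatenate([Z[i], Z[j]])
            e[i, j] = s if s > 0 else 0.2 * s         # LeakyReLU, slope 0.2
    e = np.where(A > 0, e, -np.inf)                   # attend only along edges
    alpha = np.exp(e - e.max(axis=1, keepdims=True))  # stable softmax per node
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha @ Z                                  # aggregated representations
```

With self-loops only (A the identity), each node attends solely to itself and the layer reduces to the linear projection, which gives a quick sanity check.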
Extensive experiments on various MVML applications verify that the proposed L-VSM achieves superior performance compared with state-of-the-art methods. The code is available at https://gengyulyu.github.io/homepage/assets/codes/LVSM.zip.
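As a rough illustration of the final scoring step, the sketch below takes the affinity of an instance with each class as the average, over views, of the inner products between the view's structural representation and the class's semantic representation, and pairs it with a multilabel soft-margin (BCE-with-logits) loss. Function names and the inner-product form of the affinity are our assumptions; the paper only states that per-view affinity scores are averaged and trained with this loss.

```python
import numpy as np

def multilabel_soft_margin_loss(logits, targets):
    """Mean over classes of -[y * log sigmoid(x) + (1 - y) * log sigmoid(-x)]."""
    log_sig = -np.logaddexp(0.0, -logits)      # log(sigmoid(x)), numerically stable
    log_sig_neg = -np.logaddexp(0.0, logits)   # log(sigmoid(-x)) = log(1 - sigmoid(x))
    return float(np.mean(-(targets * log_sig + (1 - targets) * log_sig_neg)))

def instance_label_affinity(view_reprs, label_reprs):
    """Affinity of one instance with each of C classes (sketch).

    view_reprs: list of (d,) structural representations, one per view.
    label_reprs: (C, d) semantic representations of the classes.
    """
    per_view = np.stack([label_reprs @ v for v in view_reprs])  # (V, C) scores
    return per_view.mean(axis=0)                                # average over views
```

Usage: `instance_label_affinity` yields one logit per class, which can be thresholded at 0 (sigmoid 0.5) for prediction or fed to `multilabel_soft_margin_loss` during training.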