Journal: IEEE Transactions on Circuits and Systems for Video Technology [Institute of Electrical and Electronics Engineers] · Date: 2023-10-16 · Volume/Issue: 34 (5): 3409-3423 · Citations: 5
Identifier
DOI: 10.1109/tcsvt.2023.3324648
Abstract
Multi-label zero-shot learning (MLZSL), which aims to recognize multiple unseen classes in a single image, is a more realistic and challenging task than single-label zero-shot learning (SLZSL). To adapt generative models to the MLZSL task and better recognize multiple unseen object categories in an image, this paper proposes a Transferable Generative Framework (TGF), which consists of Multi-Label Semantic Embedding Autoencoders (SEAs), a Semantic-Related Multi-Label Feature Transformation Network (FTN), and Multi-Label Feature Generation Networks (FGNs). First, the SEAs adaptively encode the class-level word vectors of each sample, which may contain different numbers of classes, into sample-level semantic embeddings of the same dimension. Then, the FTN transforms global features extracted by a CNN pre-trained on single-label images into semantically related features that are better suited to multi-label classification. Finally, the FGNs generate both global and local features to better recognize the dominant and minor object categories in a multi-label image, respectively. Extensive experiments on three benchmark datasets show that TGF significantly outperforms state-of-the-art methods. Specifically, compared with the previous best generative MLZSL method (i.e., Gen-MLZSL), TGF improves the mAP of the ZSL (GZSL) task by 5.4% (6.9%), 20.5% (27.9%), and 2.4% (3.9%) on the NUS-WIDE, Open Images, and MS-COCO datasets, respectively.
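Since the abstract describes the three modules only at a high level, the following is a minimal sketch of how such a pipeline could be wired together in PyTorch. All layer sizes, the attention-style pooling inside the SEA, the concatenation-based conditioning in the FTN, and the noise-conditioned generator in the FGN are illustrative assumptions; the paper's actual architectures and training losses are not specified in the abstract.

```python
# Illustrative sketch of the three-stage TGF pipeline from the abstract.
# Every design detail below (dimensions, pooling, conditioning) is an
# assumption for illustration, not the paper's actual implementation.
import torch
import torch.nn as nn

class SemanticEmbeddingAutoencoder(nn.Module):
    """SEA: encode a variable number of class-level word vectors for one
    sample into a single fixed-dimension, sample-level semantic embedding."""
    def __init__(self, word_dim=300, embed_dim=300):
        super().__init__()
        self.attn = nn.Linear(word_dim, 1)             # scores each class vector
        self.encoder = nn.Linear(word_dim, embed_dim)
        self.decoder = nn.Linear(embed_dim, word_dim)  # reconstruction branch

    def forward(self, word_vecs):                      # (num_classes_i, word_dim)
        weights = torch.softmax(self.attn(word_vecs), dim=0)
        pooled = (weights * word_vecs).sum(dim=0)      # adaptive weighted pooling
        z = self.encoder(pooled)                       # sample-level embedding
        recon = self.decoder(z)                        # for a reconstruction loss
        return z, recon

class FeatureTransformationNetwork(nn.Module):
    """FTN: map global CNN features (pre-trained on single-label images) into
    semantically related features better suited to multi-label recognition."""
    def __init__(self, feat_dim=2048, embed_dim=300, out_dim=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + embed_dim, out_dim), nn.ReLU(),
            nn.Linear(out_dim, out_dim))

    def forward(self, global_feat, sem_embed):
        return self.net(torch.cat([global_feat, sem_embed], dim=-1))

class FeatureGenerationNetwork(nn.Module):
    """FGN: generate a synthetic feature conditioned on a semantic embedding;
    one instance would target global features (dominant objects) and another
    local features (minor objects)."""
    def __init__(self, embed_dim=300, noise_dim=100, feat_dim=2048):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(embed_dim + noise_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim))

    def forward(self, sem_embed):
        noise = torch.randn(sem_embed.shape[:-1] + (self.noise_dim,))
        return self.net(torch.cat([sem_embed, noise], dim=-1))

# Toy forward pass for one image annotated with 3 classes.
sea = SemanticEmbeddingAutoencoder()
ftn = FeatureTransformationNetwork()
fgn_global = FeatureGenerationNetwork()

word_vecs = torch.randn(3, 300)   # class-level word vectors (e.g., GloVe)
z, recon = sea(word_vecs)         # fixed-size sample-level embedding
cnn_feat = torch.randn(2048)      # global feature from a pre-trained CNN
ml_feat = ftn(cnn_feat, z)        # semantically related multi-label feature
fake_feat = fgn_global(z)         # synthetic feature for unseen classes
print(z.shape, ml_feat.shape, fake_feat.shape)
```

In this reading, the SEA solves the variable-cardinality problem (each image has a different number of labels, but downstream modules need a fixed-size conditioning vector), while the two FGN instances reflect the abstract's global/local split between dominant and minor objects; a classifier trained on generated unseen-class features would then complete the ZSL/GZSL evaluation.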