Computer science
Artificial intelligence
Pattern recognition (psychology)
Merge (version control)
Machine learning
Contextual image classification
Multi-label classification
Image (mathematics)
Information retrieval
Authors
Yuhan Zhang, Luyang Luo, Qi Dou, Pheng-Ann Heng
Identifier
DOI: 10.1016/j.media.2023.102772
Abstract
Multi-label classification (MLC) can attach multiple labels to a single image and has achieved promising results on medical images. However, existing MLC methods still face challenging clinical realities in practical use, such as: (1) medical risks arising from misclassification, (2) sample imbalance among different diseases, and (3) inability to classify diseases that are not pre-defined (unseen diseases). Here, we design a hybrid label to improve the flexibility of MLC methods and alleviate the sample imbalance problem. Specifically, in the labeled training set, we retain independent labels for high-frequency diseases with enough samples and use a hybrid label to merge low-frequency diseases with fewer samples. The hybrid label can also be used to accommodate unseen diseases in practical use. In this paper, we propose Triplet Attention and Dual-pool Contrastive Learning (TA-DCL) for multi-label medical image classification based on the aforementioned label representation. The TA-DCL architecture is a triplet attention network (TAN) that combines category-attention, self-attention and cross-attention to learn high-quality label embeddings for all disease labels by mining effective information from medical images. DCL includes dual-pool contrastive training (DCT) and dual-pool contrastive inference (DCI). DCT optimizes the clustering centers of label embeddings belonging to different disease labels to improve the discrimination of the label embeddings. DCI reduces the misclassification of sick cases by contrasting differences, which lowers clinical risk and improves the ability to detect unseen diseases. TA-DCL is validated on two public medical image datasets, ODIR and NIH-ChestXray14, and shows superior performance to other state-of-the-art MLC methods. Code is available at https://github.com/ZhangYH0502/TA-DCL.
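To make the hybrid-label idea concrete, the sketch below shows one plausible way to build such labels from a multi-hot annotation matrix: high-frequency diseases keep their own columns, while all low-frequency diseases are merged into a single hybrid column that can later also absorb unseen diseases. The function name `build_hybrid_labels` and the `min_count` threshold are hypothetical illustrations, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def build_hybrid_labels(labels: np.ndarray, min_count: int):
    """Collapse low-frequency disease labels into one shared hybrid label.

    labels:    (N, C) multi-hot matrix, one column per pre-defined disease.
    min_count: diseases with fewer positive samples than this are merged.

    Returns (N, K+1) labels: K columns for high-frequency diseases kept as
    independent labels, plus a final hybrid column that is 1 whenever the
    sample carries any low-frequency disease.
    """
    counts = labels.sum(axis=0)                  # positives per disease
    keep = np.where(counts >= min_count)[0]      # high-frequency diseases
    merge = np.where(counts < min_count)[0]      # low-frequency diseases

    kept = labels[:, keep]                                            # (N, K)
    hybrid = (labels[:, merge].sum(axis=1) > 0).astype(labels.dtype)  # (N,)
    return np.concatenate([kept, hybrid[:, None]], axis=1), keep, merge


# Usage with toy data: 6 samples, 4 diseases, frequency threshold of 3.
if __name__ == "__main__":
    y = np.array([[1, 0, 0, 1],
                  [1, 1, 0, 0],
                  [0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 1]])
    y_hybrid, kept_idx, merged_idx = build_hybrid_labels(y, min_count=3)
    print(kept_idx, merged_idx)   # diseases 0 and 1 kept; 2 and 3 merged
    print(y_hybrid)               # 3 columns: two independent + one hybrid
```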