Authors
Mingyue Su,Guanghua Gu,Xianlong Ren,Hao Fu,Yao Zhao
Identifier
DOI:10.1109/tmm.2021.3129623
Abstract
Deep hashing methods have achieved tremendous success in cross-modal retrieval due to their low storage consumption and fast retrieval speed. Supervised cross-modal hashing methods have achieved substantial advancement by incorporating semantic information. However, supervised methods rely to a great extent on large-scale labeled cross-modal training data, which are laborious to obtain. Moreover, most cross-modal hashing methods handle only the two modalities of image and text, without taking scenes with multiple modalities into consideration. In this paper, we propose a novel semi-supervised approach called semi-supervised knowledge distillation for cross-modal hashing (SKDCH) to overcome the above-mentioned challenges; it guides a supervised method using outputs produced by a semi-supervised method for multimodality retrieval. Specifically, we utilize teacher-student optimization to propagate knowledge. Furthermore, we improve the triplet ranking loss to better mitigate the heterogeneity gap, which increases the discriminability of our proposed approach. Extensive experiments executed on two benchmark datasets validate that the proposed SKDCH surpasses the state-of-the-art methods.
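The abstract names two generic ingredients: a triplet ranking loss over learned hash codes and a teacher-student (knowledge distillation) objective. The sketch below shows the standard textbook forms of both, in numpy, as an illustration only; the paper's improved triplet variant and exact distillation setup are not specified in the abstract, and all function names here are hypothetical.

```python
import numpy as np

def triplet_ranking_loss(anchor, positive, negative, margin=1.0):
    # Standard hinge-style triplet loss: the anchor code should be at
    # least `margin` closer (squared L2) to the positive code than to
    # the negative one. Inputs are real-valued relaxed hash codes,
    # shape (batch, code_length).
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Generic knowledge-distillation term: KL divergence between the
    # teacher's and student's temperature-softened distributions.
    def softmax_T(x, T):
        z = x / T
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)
    p_t = softmax_T(teacher_logits, T)
    p_s = softmax_T(student_logits, T)
    eps = 1e-9  # numerical floor to keep the logs finite
    return np.mean(np.sum(p_t * (np.log(p_t + eps) - np.log(p_s + eps)), axis=-1))
```

In a semi-supervised setting of this kind, the triplet term would typically be computed on the labeled pairs while the distillation term lets the teacher's soft outputs supervise the student on unlabeled data; how SKDCH combines and weights them is described in the full paper, not here.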