Hash function
Computer science
Universal hashing
Dynamic perfect hashing
Double hashing
Modality (human-computer interaction)
Feature hashing
Hash table
Locality-sensitive hashing
Theoretical computer science
Pairwise comparison
Image (mathematics)
K-independent hashing
Artificial intelligence
Pattern recognition (psychology)
Algorithm
Computer security
Authors
Yimu Wang, Bo Xue, Quan Cheng, Yuhui Chen, Lijun Zhang
Identifier
DOI:10.24963/ijcai.2021/156
Abstract
With the increasing amount of multimedia data, cross-modality hashing has made great progress, as it achieves sub-linear search time and low memory usage. However, due to the huge discrepancy between different modalities, most existing cross-modality hashing methods cannot learn unified hash codes and functions for all modalities at the same time. The gap between separate hash codes and functions further leads to poor search performance. In this paper, to address the issues above, we propose a novel end-to-end Deep Unified Cross-Modality Hashing method named DUCMH, which is able to jointly learn unified hash codes and unified hash functions through alternate learning and data alignment. Specifically, to reduce the discrepancy between the image and text modalities, DUCMH utilizes data alignment to learn an auxiliary image-to-text mapping under the supervision of image-text pairs. For text data, hash codes can be obtained directly from the unified hash functions, while for image data, DUCMH first maps images to texts via the auxiliary mapping, and then uses the mapped texts to obtain hash codes. DUCMH utilizes alternate learning to update the unified hash codes and functions. Extensive experiments on three representative image-text datasets demonstrate the superiority of our DUCMH over several state-of-the-art cross-modality hashing methods.
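The unified-hashing idea described in the abstract — hash text features directly, and hash image features by first sending them through an auxiliary image-to-text mapping so both modalities share one hash function and one code space — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear mapping `M`, the projection `W`, the feature dimensions, and the sign-based binarization are all placeholder assumptions standing in for the learned deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): image/text feature sizes
# and the hash-code length K.
D_IMG, D_TXT, K = 512, 300, 64

# Auxiliary image-to-text mapping: in DUCMH this is learned under the
# supervision of image-text pairs; here it is a random linear placeholder.
M = rng.standard_normal((D_IMG, D_TXT)) * 0.01

# Unified hash function: one projection shared by both modalities,
# binarized to {-1, +1} codes.
W = rng.standard_normal((D_TXT, K)) * 0.01

def hash_text(txt_feat: np.ndarray) -> np.ndarray:
    # Text features go straight through the unified hash function.
    return np.where(txt_feat @ W >= 0, 1, -1).astype(np.int8)

def hash_image(img_feat: np.ndarray) -> np.ndarray:
    # Image features are first mapped into the text space, then hashed
    # by the SAME unified function, so both modalities land in one
    # shared Hamming space.
    return hash_text(img_feat @ M)

# Toy query: Hamming distance between an image code and a text code.
img_code = hash_image(rng.standard_normal(D_IMG))
txt_code = hash_text(rng.standard_normal(D_TXT))
hamming = int(np.sum(img_code != txt_code))
print(img_code.shape, txt_code.shape, hamming)
```

Because both modalities share `W`, cross-modal retrieval reduces to comparing K-bit codes with Hamming distance, which is what gives hashing its sub-linear search time and low memory footprint.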