Keywords
Computer science; Binary code; Artificial intelligence; Unsupervised learning; Hash function; Pattern recognition (psychology); Leverage (statistics); Image retrieval; Quantization (signal processing); Machine learning; Binary number; Data mining; Image (mathematics); Algorithm; Mathematics; Computer security; Arithmetic
Authors
Sheng Jin, Hongxun Yao, Qin Zhou, Yao Liu, Jianqiang Huang, Xian-Sheng Hua
Source
Journal: IEEE Transactions on Image Processing
[Institute of Electrical and Electronics Engineers]
Date: 2021-01-01
Volume 30, pp. 6130-6141
Citations: 11
Identifier
DOI: 10.1109/TIP.2021.3091895
Abstract
In recent years, supervised hashing has been shown to greatly boost the performance of image retrieval. However, its label-hungry nature requires massive label collection, making it intractable in practical scenarios. To free model training from laborious manual annotation, several unsupervised methods have been proposed. Two factors, however, keep unsupervised algorithms inferior to their supervised counterparts: (1) Without manually defined labels, it is difficult to capture the semantic information across data, which is crucial for guiding robust binary code learning. (2) The widely adopted relaxation of the binary constraints causes quantization error to accumulate during optimization. To address these problems, this paper proposes a novel Unsupervised Discrete Hashing method (UDH). Specifically, to capture semantic information, we propose a balanced graph-based semantic loss that exploits affinity priors in the original feature space. We then propose a novel self-supervised loss, termed the orthogonal consistent loss, which leverages the instance-level semantic loss and imposes independence among the binary codes. Moreover, by integrating discrete optimization into the proposed unsupervised framework, the binary constraints are consistently preserved, alleviating the influence of quantization errors. Extensive experiments demonstrate that UDH outperforms state-of-the-art unsupervised methods for image retrieval.
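To make the retrieval setting concrete, the sketch below shows the generic hashing-based retrieval pipeline the abstract builds on (not the paper's UDH algorithm): real-valued features are binarized with the sign function, and database items are ranked by Hamming distance to the query code. All feature values here are made up for illustration.

```python
# Generic hashing-based retrieval sketch (illustrative only, not UDH).
# Features and codes are toy values; real methods learn the projection.

def binarize(features):
    """Map a real-valued feature vector to a binary code via sign()."""
    return [1 if x >= 0 else 0 for x in features]

def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return sum(x != y for x, y in zip(a, b))

def retrieve(query_code, database_codes):
    """Rank database indices by Hamming distance to the query (stable sort)."""
    return sorted(range(len(database_codes)),
                  key=lambda i: hamming(query_code, database_codes[i]))

# Toy database: 4-bit codes from made-up feature vectors.
db_features = [[0.9, -0.2, 0.4, -0.7],
               [0.8, -0.1, 0.5, -0.6],
               [-0.3, 0.7, -0.9, 0.2]]
db_codes = [binarize(f) for f in db_features]

query = binarize([1.0, -0.4, 0.3, -0.5])
ranking = retrieve(query, db_codes)  # nearest items first
```

The quantization error the abstract refers to is exactly the gap between the real-valued features and their sign-binarized codes; relaxed optimization works on the real values and pays that gap at binarization time, which is what UDH's discrete optimization avoids.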