Keywords
Computer science
Hamming space
Hash function
Artificial intelligence
Image retrieval
Convolutional neural network
Deep learning
Pattern recognition (psychology)
Image (mathematics)
Hamming code
Algorithm
Decoding methods
Computer security
Block code
Authors
Yuxi Sun, Shanshan Feng, Yunming Ye, Xutao Li, Jian Kang, Zhichao Huang, Chuyao Luo
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing (Institute of Electrical and Electronics Engineers)
Date: 2022-01-01
Volume: 60, Pages: 1-14
Citations: 12
Identifier
DOI: 10.1109/tgrs.2021.3136641
Abstract
Cross-modal hashing is an important tool for retrieving useful information from very-high-resolution (VHR) optical images and synthetic aperture radar (SAR) images. Dealing with the intermodal discrepancies, including both spatial–spectral and visual semantic aspects, between VHR and SAR images is extremely vital to generate high-quality common hash codes in the Hamming space. However, existing cross-modal hashing methods ignore the spatial–spectral discrepancy when representing VHR and SAR images. Moreover, existing methods employ derived supervised signals, such as pairwise training images, to implicitly guide hashing learning, which fails to effectively deal with the visual semantic discrepancy, i.e., cannot adequately preserve the intraclass similarity and interclass discrimination between VHR and SAR images. To address these drawbacks, this article proposes a multisensor fusion and explicit semantic preserving-based deep hashing method, termed MsEspH, which can effectively deal with these discrepancies. Specifically, we design a novel cross-modal hashing network to eliminate the spatial–spectral discrepancies by fusing extra multispectral images (MSIs), which are generated in real time by a generative adversarial network. Then, we propose an explicit semantic preserving-based objective function by analyzing the connection between classification and hash learning. The objective function can preserve the intraclass similarity and interclass discrimination with class labels directly. Moreover, we theoretically verify that hash learning and classification can be unified into a learning framework under certain conditions. To evaluate our method, we construct and release a large-scale VHR-SAR image dataset. Extensive experiments on the dataset demonstrate that our method outperforms various state-of-the-art cross-modal hashing methods.
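To make the setting concrete, below is a minimal, hypothetical sketch (not the authors' MsEspH implementation) of two ideas summarized in the abstract: learning relaxed hash codes under explicit class-label supervision via a classification head, and ranking by Hamming distance over the resulting binary codes. All names, layer sizes, and the loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch only: couples hash learning with a classification head,
# loosely following the abstract's idea of explicit semantic supervision from
# class labels. Feature dimension, bit length, and loss weighting are assumed.

class ToyHashHead(nn.Module):
    def __init__(self, feat_dim=512, n_bits=64, n_classes=10):
        super().__init__()
        self.hash_layer = nn.Linear(feat_dim, n_bits)   # continuous hash logits
        self.classifier = nn.Linear(n_bits, n_classes)  # class scores from codes

    def forward(self, feats):
        h = torch.tanh(self.hash_layer(feats))  # relaxed codes in (-1, 1)
        return h, self.classifier(h)

model = ToyHashHead()
feats = torch.randn(8, 512)            # stand-in deep features (either modality)
labels = torch.randint(0, 10, (8,))    # class labels used as explicit supervision

h, logits = model(feats)
cls_loss = F.cross_entropy(logits, labels)    # encourages intraclass similarity / interclass discrimination
quant_loss = (h.abs() - 1.0).pow(2).mean()    # push relaxed codes toward +/-1
loss = cls_loss + 0.1 * quant_loss            # weighting is an arbitrary choice here

# Retrieval: binarize and rank by Hamming distance between +/-1 codes.
codes = torch.sign(h)
hamming = (codes.shape[1] - codes @ codes.t()) / 2   # pairwise distances, shape (8, 8)
```

In the cross-modal case, the same Hamming ranking would compare query codes from the optical branch against database codes from the SAR branch; the binarization and distance computation are unchanged.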