Computer science
Hash function
Pairwise comparison
Modal
Deep learning
Image retrieval
Benchmark (surveying)
Artificial intelligence
Image (mathematics)
Focus (optics)
Data mining
Pattern recognition (psychology)
Physics
Geography
Polymer chemistry
Chemistry
Optics
Computer security
Geodesy
Author
Mikel Williams-Lekuona, Georgina Cosma, Iain Phillips
Identifier
DOI:10.3390/jimaging8120328
Abstract
Cross-Modal Hashing (CMH) retrieval methods have garnered increasing attention within the information retrieval research community due to their capability to deal with large amounts of data, thanks to the computational efficiency of hash-based methods. To date, the focus of cross-modal hashing methods has been on training with paired data. Paired data refers to samples with one-to-one correspondence across modalities, e.g., image and text pairs where the text sample describes the image. However, real-world applications produce unpaired data that cannot be utilised by most current CMH methods during the training process. Models that can learn from unpaired data are crucial for real-world applications such as cross-modal neural information retrieval, where paired data is limited or not available to train the model. This paper (1) provides an overview of CMH methods when applied to unpaired datasets, (2) proposes a framework that enables pairwise-constrained CMH methods to train with unpaired samples, and (3) evaluates the performance of state-of-the-art CMH methods across different pairing scenarios.
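To make the retrieval mechanism referred to in the abstract concrete, the following is a minimal sketch of cross-modal hash-based retrieval, not the paper's method: two hypothetical modality-specific encoders (stand-ins for trained deep networks, implemented here as random projections) map image and text features to binary codes, and a text query retrieves images by Hamming distance. All names, dimensions, and the projection-based encoding are illustrative assumptions.

```python
import numpy as np

def hash_codes(features, projection):
    """Map real-valued features to {-1, +1} binary hash codes via the sign
    of a linear projection (a stand-in for a learned deep hashing encoder)."""
    return np.sign(features @ projection)

def hamming_distance(query_code, database_codes):
    """Hamming distance between one code and a database of codes.
    For {-1, +1} codes of length K: d = (K - q . b) / 2."""
    k = query_code.shape[0]
    return (k - database_codes @ query_code) / 2

rng = np.random.default_rng(0)
k = 16                        # hash code length in bits
img_dim, txt_dim = 512, 300   # hypothetical feature dimensions

# Hypothetical modality-specific encoders; in CMH these would be trained so
# that semantically related image/text samples receive similar codes.
img_proj = rng.standard_normal((img_dim, k))
txt_proj = rng.standard_normal((txt_dim, k))

# Database of image codes, queried with a text sample (cross-modal retrieval).
image_feats = rng.standard_normal((1000, img_dim))
image_codes = hash_codes(image_feats, img_proj)

text_query = rng.standard_normal(txt_dim)
query_code = hash_codes(text_query, txt_proj)

# Ranking by Hamming distance reduces to cheap bit-level arithmetic, which is
# why hash-based retrieval scales to large databases.
dists = hamming_distance(query_code, image_codes)
top10 = np.argsort(dists)[:10]
print("Top-10 retrieved image indices:", top10)
```

The pairwise-constrained training the abstract mentions would supervise such encoders with losses defined over known image-text pairs; the framework proposed in the paper addresses the case where such one-to-one pairs are partially or entirely unavailable.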