Modality
Computer science
Pattern
Consistency (knowledge base)
Information retrieval
Taxonomy (biology)
Domain (mathematics)
Data science
Matching (statistics)
Artificial intelligence
Social science
Chemistry
Botany
Statistics
Mathematics
Sociology
Polymer chemistry
Pure mathematics
Biology
Authors
Lei Zhu, Tianshi Wang, Fengling Li, Jingjing Li, Zheng Zhang, Heng Tao Shen
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Citations: 3
Identifier
DOI:10.48550/arxiv.2308.14263
Abstract
With the exponential surge in diverse multi-modal data, traditional uni-modal retrieval methods struggle to meet the needs of users demanding access to data across modalities. To address this, cross-modal retrieval has emerged, enabling interaction across modalities, facilitating semantic matching, and leveraging the complementarity and consistency between different modal data. Although prior literature has reviewed the cross-modal retrieval field, it falls short in timeliness, taxonomy, and comprehensiveness. This paper conducts a comprehensive review of cross-modal retrieval's evolution, spanning from shallow statistical analysis techniques to vision-language pre-training models. Commencing with a comprehensive taxonomy grounded in machine learning paradigms, mechanisms, and models, the paper then delves deeply into the principles and architectures underpinning existing cross-modal retrieval methods. Furthermore, it offers an overview of widely used benchmarks, metrics, and performances. Lastly, the paper probes the prospects and challenges that confront contemporary cross-modal retrieval, and discusses potential directions for further progress in the field. To facilitate research on cross-modal retrieval, the authors provide an open-source code repository at https://github.com/BMC-SDNU/Cross-Modal-Retrieval.
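The semantic matching the abstract describes is commonly realized by embedding items from different modalities into a shared vector space and ranking gallery items by similarity to a query from another modality. The sketch below is purely illustrative (it is not the surveyed methods themselves): random vectors stand in for the outputs of learned image and text encoders, and retrieval reduces to a cosine-similarity ranking.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between each row of a and each row of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# Toy stand-ins for learned encoders: in a real system, neural networks
# would map images and text into the same d-dimensional space.
rng = np.random.default_rng(0)
d = 8
image_embeddings = rng.normal(size=(5, d))  # gallery of 5 "image" vectors
text_embedding = rng.normal(size=(1, d))    # one "text" query vector

# Cross-modal retrieval: score every gallery image against the text query
# and rank the gallery by similarity, best match first.
scores = cosine_sim(text_embedding, image_embeddings)[0]
ranking = np.argsort(-scores)
```

In practice the two encoders are trained jointly (e.g. with a contrastive objective) so that semantically matching image-text pairs land close together in the shared space; the retrieval step itself stays this simple.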