Keywords
Embedding
Matching (statistics)
Computer science
Tensor (intrinsic definition)
Ranking (information retrieval)
Artificial intelligence
Pattern recognition (psychology)
Modality
Image (mathematics)
Rank (graph theory)
Feature (linguistics)
Similarity (geometry)
Feature vector
Mathematics
Linguistics
Statistics
Chemistry
Philosophy
Combinatorics
Polymer chemistry
Pure mathematics
Authors
Tan Wang, Xing Xu, Yang Yang, Alan Hanjalić, Heng Tao Shen, Jingkuan Song
Identifier
DOI: 10.1145/3343031.3350875
Abstract
A major challenge in matching images and text is that they have intrinsically different data distributions and feature representations. Most existing approaches are based either on embedding or on classification: the former maps image and text instances into a common embedding space for distance measurement, while the latter treats image-text matching as a binary classification problem. Neither approach, however, balances matching accuracy and model complexity well. We propose a novel framework that achieves remarkable matching performance with acceptable model complexity. Specifically, in the training stage, we propose a novel Multi-modal Tensor Fusion Network (MTFN) that explicitly learns an accurate image-text similarity function through rank-based tensor fusion, rather than seeking a common embedding space for each image-text instance. During testing, we then deploy a generic Cross-modal Re-ranking (RR) scheme for refinement without requiring any additional training procedure. Extensive experiments on two datasets demonstrate that our MTFN-RR consistently achieves state-of-the-art matching performance with much lower time complexity.
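As a rough illustration of the rank-based tensor fusion idea described in the abstract, the sketch below scores an image-text pair with a low-rank bilinear fusion followed by a sigmoid, so the similarity lies in (0, 1). This is a minimal sketch, not the authors' implementation: the class name, feature dimensions (d_img, d_txt), and fusion rank are hypothetical placeholders, and pre-extracted global image and text features are assumed.

```python
# Minimal sketch of a rank-constrained (low-rank bilinear) image-text similarity,
# assuming pre-extracted global features; dimensions below are illustrative only.
import torch
import torch.nn as nn


class LowRankFusionSimilarity(nn.Module):
    """Scores an image-text pair via a rank-R factorised bilinear fusion.

    The full bilinear tensor is replaced by two linear projections whose
    element-wise product is reduced to a scalar similarity score.
    """

    def __init__(self, d_img: int = 2048, d_txt: int = 1024, rank: int = 256):
        super().__init__()
        self.proj_img = nn.Linear(d_img, rank)  # image-side factor
        self.proj_txt = nn.Linear(d_txt, rank)  # text-side factor
        self.score = nn.Linear(rank, 1)         # reduce fused vector to one score

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        # Element-wise product of the projected modalities approximates a
        # bilinear interaction with rank at most `rank`.
        fused = torch.tanh(self.proj_img(img_feat)) * torch.tanh(self.proj_txt(txt_feat))
        return torch.sigmoid(self.score(fused)).squeeze(-1)  # similarity in (0, 1)


# Usage: score a batch of 4 image-text pairs with random placeholder features.
model = LowRankFusionSimilarity()
img = torch.randn(4, 2048)
txt = torch.randn(4, 1024)
print(model(img, txt).shape)  # torch.Size([4])
```

The design choice sketched here (factorising the interaction tensor instead of embedding both modalities into one shared space) is what keeps the pairwise similarity expressive while bounding parameter count by the chosen rank.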