Computer Science
Complementarity (molecular biology)
Artificial Intelligence
Genetics
Biology
Authors
Jinkuan Zhu,Pengpeng Zeng,Lianli Gao,Gongfu Li,Dongliang Liao,Jingkuan Song
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-09
Volume/Issue: 33 (8): 4362-4374
Citations: 15
Identifier
DOI: 10.1109/tcsvt.2023.3235523
Abstract
In general, videos are powerful at recording physical patterns (e.g., spatial layout), while texts excel at describing abstract symbols (e.g., emotion). When video and text are used in multi-modal tasks, they are claimed to be complementary, and their distinct information is crucial. However, in cross-modal tasks (e.g., retrieval), existing works usually exploit only their common part through common-space learning, while their distinct information is abandoned. In this paper, we argue that distinct information is also beneficial for cross-modal retrieval. To address this problem, we propose a divide-and-conquer learning approach, namely Complementarity-aware Space Learning (CSL), which recasts this challenge as the learning of two spaces (i.e., latent and symbolic spaces) that simultaneously exploit the common and distinct information of the two modalities by considering their complementary character. Specifically, we first propose to learn a symbolic space from video with a memory-based video encoder and a symbolic generator. Correspondingly, we learn a latent space from text with a text encoder and a memory-based latent feature selector. Finally, we propose a complementarity-aware loss that integrates the two spaces to facilitate video-text retrieval. Extensive experiments show that our approach outperforms existing state-of-the-art methods by 5.1%, 2.1%, and 0.9% in R@10 for text-to-video retrieval on three benchmarks, respectively. An ablation study also verifies that the distinct information from video and text improves retrieval performance. Trained models and source code have been released at https://github.com/NovaMind-Z/CSL.
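The abstract describes CSL at a high level: a latent (common) space and a symbolic (distinct) space are learned from video and text, and a complementarity-aware loss ties the two spaces together for retrieval. The snippet below is a minimal PyTorch sketch of that two-space idea, assuming pooled video/text features, an InfoNCE-style contrastive loss for the latent space, and a simple agreement term for the symbolic space; all module names, feature dimensions, and the weighting `alpha` are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
# Illustrative sketch only -- NOT the authors' released CSL code.
# Module names, dimensions, and the loss weighting are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoSpaceModel(nn.Module):
    """Toy two-space model: a shared latent space plus a symbolic (concept) space."""

    def __init__(self, video_dim=2048, text_dim=768, latent_dim=512, num_symbols=1000):
        super().__init__()
        # Latent-space projections (common information shared by both modalities).
        self.video_to_latent = nn.Linear(video_dim, latent_dim)
        self.text_to_latent = nn.Linear(text_dim, latent_dim)
        # Symbolic-space heads (distinct information modeled here as concept logits).
        self.video_to_symbols = nn.Linear(video_dim, num_symbols)
        self.text_to_symbols = nn.Linear(text_dim, num_symbols)

    def forward(self, video_feat, text_feat):
        v_lat = F.normalize(self.video_to_latent(video_feat), dim=-1)
        t_lat = F.normalize(self.text_to_latent(text_feat), dim=-1)
        v_sym = torch.sigmoid(self.video_to_symbols(video_feat))
        t_sym = torch.sigmoid(self.text_to_symbols(text_feat))
        return v_lat, t_lat, v_sym, t_sym


def complementarity_aware_loss(v_lat, t_lat, v_sym, t_sym, temperature=0.05, alpha=0.5):
    """Combine a latent-space contrastive loss with a symbolic-space agreement loss."""
    # Latent space: bidirectional InfoNCE over the batch (matched pairs on the diagonal).
    logits = v_lat @ t_lat.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    latent_loss = 0.5 * (F.cross_entropy(logits, targets) +
                         F.cross_entropy(logits.t(), targets))
    # Symbolic space: encourage matched pairs to activate the same concepts.
    symbolic_loss = F.mse_loss(v_sym, t_sym)
    return latent_loss + alpha * symbolic_loss


if __name__ == "__main__":
    model = TwoSpaceModel()
    video = torch.randn(8, 2048)   # e.g., pooled video features
    text = torch.randn(8, 768)     # e.g., pooled sentence features
    loss = complementarity_aware_loss(*model(video, text))
    print(loss.item())
```

In the paper, the symbolic space is produced by a memory-based video encoder and a symbolic generator, and the latent features from text pass through a memory-based selector; the sketch only mirrors the overall structure of combining a common-space objective with a distinct-information term.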