CrossFormer: Cross-modal Representation Learning via Heterogeneous Graph Transformer

Authors
Xiao Liang, Erkun Yang, Cheng Deng, Yanhua Yang
Source
Journal: ACM Transactions on Multimedia Computing, Communications, and Applications [Association for Computing Machinery]
Cited by: 4
Identifiers
DOI: 10.1145/3688801
Abstract

Transformers have been recognized as powerful tools for various cross-modal tasks due to their superior ability to perform representation learning through self-attention. Existing transformer-based cross-modal models can be categorized into single-stream and dual-stream models. By performing fine-grained interaction with self-attention on concatenated cross-modal features, the former simultaneously learn intra- and inter-modal correlations. However, this simple concatenation treats the inputs of different modalities equally; as a result, the heterogeneous differences between modalities are ignored, leading to a modality gap. The latter process the inputs of different modalities separately and perform cross-modal interaction only in subsequent fusion layers, and therefore fail to integrate fine-grained intra- and inter-modal correlations within a unified module. To this end, we propose an effective heterogeneous graph transformer for dual-stream cross-modal representation learning, named CrossFormer, which constructs a heterogeneous graph as a bridge to achieve fine-grained intra- and inter-modal interaction in a dual-stream network. Specifically, we first represent multi-modal data with a heterogeneous graph, then develop a dual-positional encoding strategy that provides the heterogeneous graph with relative positional information. Finally, dual-stream self-attention is performed on the heterogeneous graph, bridging the gap between modalities and effectively capturing fine-grained intra- and inter-modal interactions simultaneously. Extensive experiments on various cross-modal tasks demonstrate the superiority of our method.
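
The abstract gives only a high-level description of the pipeline: a heterogeneous graph whose nodes come from two modalities, a per-modality ("dual") positional encoding, and self-attention separated into intra- and inter-modal interactions. The PyTorch sketch below is a minimal illustration of that idea under stated assumptions; the class name DualStreamAttention, the mask-based edge split, and the per-modality positional tables are hypothetical choices made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the CrossFormer idea from the abstract: tokens
# from two modalities form the nodes of a heterogeneous graph, each
# modality gets its own positional encoding, and self-attention is split
# into intra- and inter-modal streams via boolean attention masks.
import torch
import torch.nn as nn


class DualStreamAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Separate attention modules for the two heterogeneous edge types.
        self.intra_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.inter_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, x: torch.Tensor, modality_ids: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, dim); modality_ids: (num_nodes,), 0 = image, 1 = text.
        same = modality_ids.unsqueeze(0) == modality_ids.unsqueeze(1)
        # In nn.MultiheadAttention, a True mask entry *blocks* attention.
        intra, _ = self.intra_attn(x, x, x, attn_mask=~same)  # within-modality edges only
        inter, _ = self.inter_attn(x, x, x, attn_mask=same)   # cross-modality edges only
        return self.fuse(torch.cat([intra, inter], dim=-1))


# Toy usage: 6 image-patch nodes followed by 8 text-token nodes.
dim, n_img, n_txt = 64, 6, 8
x = torch.randn(2, n_img + n_txt, dim)

# "Dual-positional encoding" stand-in: an independent positional table per
# modality (random here; learned in practice).
pos = torch.cat([torch.randn(n_img, dim), torch.randn(n_txt, dim)], dim=0)
x = x + pos

modality_ids = torch.tensor([0] * n_img + [1] * n_txt)
out = DualStreamAttention(dim)(x, modality_ids)
print(out.shape)  # torch.Size([2, 14, 64])
```

Splitting the two edge types into separate attention modules keeps intra- and inter-modal correlations inside one unified module while still treating the modalities as heterogeneous, which is precisely the gap the abstract attributes to single-stream concatenation on one side and late-fusion dual-stream models on the other.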