Computer science
Preference
Modality (human–computer interaction)
Recommender system
Artificial intelligence
Dual (grammatical number)
Space (punctuation)
Machine learning
Representation (politics)
Task (project management)
Information retrieval
Human–computer interaction
Natural language processing
Management
Politics
Political science
Law
Economics
Microeconomics
Art
Literature
Operating system
Authors
Jie Guo,Longyu Wen,Yan Zhou,Bin Song,Yuhao Chi,F. Richard Yu
Identifier
DOI:10.1109/tmm.2024.3382889
Abstract
Multimodal recommendation is an emerging task that aims to improve the effectiveness of recommendation systems by utilizing multimodal data (images, texts, etc.). Most previous methods have struggled to mine item semantic relationships while accurately modeling user modality preferences, resulting in low recommendation accuracy. To address this issue, this paper proposes a novel and effective Self-suPervised duAl preference enhanCing nEtwork for multimodal recommendation, named SPACE, which further mines user preferences towards historical interactions and multimodal features of items to obtain more precise user and item representations. Specifically, we design an interaction preference enhancing module to learn both interactive and latent semantic relationships between users and items. Then, a modality preference enhancing module is established by introducing self-supervised learning (SSL), which aims to strengthen the role of the dominant modality-specific representation of items. Finally, the enhanced interaction and modality representations are fused, and recommendation performance is largely improved by utilizing dual joint prediction. Extensive experiments are conducted on three real-world datasets, and the experimental results demonstrate that the proposed SPACE model outperforms state-of-the-art multimodal recommendation methods.
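To make the two ingredients of the abstract concrete, the sketch below illustrates (a) a generic InfoNCE-style self-supervised objective between two modality views of the same items, as one common way to realize the SSL component, and (b) a weighted fusion of interaction-preference and modality-preference scores as a stand-in for dual joint prediction. This is a minimal illustration under stated assumptions: the function names, the InfoNCE formulation, and the alpha-weighted fusion are hypothetical and are not taken from the paper, whose exact losses and fusion rule may differ.

import torch
import torch.nn.functional as F

def info_nce(view_a, view_b, temperature=0.2):
    # Generic InfoNCE contrastive loss between two modality views of the same
    # items (e.g., image vs. text embeddings); an assumed SSL objective, not
    # necessarily the one used in SPACE.
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.t() / temperature              # (N, N) pairwise similarities
    labels = torch.arange(a.size(0), device=a.device)  # matching rows are positives
    return F.cross_entropy(logits, labels)

def dual_joint_score(user_int, item_int, user_mod, item_mod, alpha=0.5):
    # Hypothetical fusion of an interaction-based preference score and a
    # modality-based preference score; the paper's actual dual joint
    # prediction may combine them differently.
    s_int = (user_int * item_int).sum(-1)         # interaction-preference score
    s_mod = (user_mod * item_mod).sum(-1)         # modality-preference score
    return alpha * s_int + (1 - alpha) * s_mod

# Toy usage with random embeddings, only to show the shapes involved.
N, d = 8, 64
img_emb, txt_emb = torch.randn(N, d), torch.randn(N, d)
ssl_loss = info_nce(img_emb, txt_emb)
scores = dual_joint_score(torch.randn(N, d), torch.randn(N, d),
                          torch.randn(N, d), torch.randn(N, d))

In this reading, the SSL term pulls together different modality views of the same item while pushing apart views of different items, and the final ranking score blends the enhanced interaction and modality representations, which is the general pattern the abstract describes.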