Computer Science
Modality
Knowledge Graph
Graph
Information Retrieval
Theoretical Computer Science
Authors
Xi Chen, Yuehai Wang, Jianyi Yang
Identifier
DOI: 10.1109/iccc59590.2023.10507494
Abstract
In recommendation tasks, data sparsity is a key problem that limits recommendation performance. Researchers typically use supplementary information, such as item attributes and user profiles, to alleviate this sparsity. In this paper, we use both multi-modal features and a knowledge graph as supplementary information: the multi-modal features provide richer item properties from multiple perspectives, while the knowledge graph reveals latent connections between users and items. To fully exploit and combine these two kinds of supplementary information, we propose an end-to-end framework called MRKG. MRKG fuses the features of the image and text modalities to capture users' multi-modal preferences for items; these preferences are then propagated and aggregated across the knowledge graph to obtain higher-order user representations for recommendation. We evaluate our method on the MovieLens dataset. Experiments show that our model outperforms competing recommendation models on both the AUC and ACC metrics. In addition, we explore the influence of different multi-modal fusion methods through an ablation study.
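The abstract does not include code, so the following is a minimal, hypothetical PyTorch sketch of the pipeline it describes: an early-fusion module that combines an item's image and text features, followed by a single-hop, attention-weighted propagation of the resulting user preference over knowledge-graph triples, scored with a dot product for click-through prediction (matching the AUC/ACC evaluation). The module names (MultiModalFusion, KGPropagation, MRKGSketch), all dimensions, the concat-plus-MLP fusion, and the RippleNet-style one-hop aggregation are illustrative assumptions, not the paper's actual architecture.

# Hypothetical sketch of the MRKG pipeline described in the abstract.
# Fusion choice, hop count, and dimensions are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiModalFusion(nn.Module):
    """Fuse an item's image and text features into one embedding."""
    def __init__(self, img_dim, txt_dim, out_dim):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(img_dim + txt_dim, out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, img_feat, txt_feat):
        # Early fusion: concatenate modalities, then map with an MLP.
        return self.proj(torch.cat([img_feat, txt_feat], dim=-1))

class KGPropagation(nn.Module):
    """One hop of preference propagation over KG triples (h, r, t)."""
    def __init__(self, n_entities, n_relations, dim):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def forward(self, user_pref, heads, rels, tails):
        # Attend from the user preference to each triple's head+relation,
        # then take the attention-weighted sum of tail embeddings.
        h, r, t = self.ent(heads), self.rel(rels), self.ent(tails)
        scores = ((h + r) * user_pref.unsqueeze(1)).sum(-1)   # (B, n_triples)
        alpha = F.softmax(scores, dim=-1)
        return (alpha.unsqueeze(-1) * t).sum(1)               # (B, dim)

class MRKGSketch(nn.Module):
    def __init__(self, n_entities, n_relations, img_dim=512, txt_dim=300, dim=64):
        super().__init__()
        self.fusion = MultiModalFusion(img_dim, txt_dim, dim)
        self.kg = KGPropagation(n_entities, n_relations, dim)

    def forward(self, img_feat, txt_feat, heads, rels, tails, item_ids):
        # Seed the user preference from fused features of interacted items,
        # propagate it over the KG, then score against the candidate item.
        user_pref = self.fusion(img_feat, txt_feat)           # (B, dim)
        user_repr = self.kg(user_pref, heads, rels, tails)    # (B, dim)
        item_emb = self.kg.ent(item_ids)                      # (B, dim)
        return torch.sigmoid((user_repr * item_emb).sum(-1))  # CTR probability

Swapping the concatenation in MultiModalFusion for, say, element-wise or attention-based fusion is exactly the kind of variation the abstract's ablation study on fusion methods would compare.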