Topics: Computer science, Artificial intelligence, Graph, Feature (linguistics), Feature learning, Machine learning, Sensor fusion, Semantic feature, Modality (human-computer interaction), Theoretical computer science, Linguistics, Philosophy
Authors
Yan Zhou, Jie Guo, Hao Sun, Bin Song, F. Richard Yu
Identifier
DOI: 10.1145/3539618.3591950
Abstract
The core idea of multimodal recommendation is to make rational use of an item's multimodal information to improve recommendation performance. Previous works directly integrate item multimodal features with item ID embeddings, ignoring the inherent semantic relations contained in the multimodal features. In this paper, we propose a novel and effective aTtention-guided Multi-step FUsion Network for multimodal recommendation, named TMFUN. Specifically, our model first constructs a modality feature graph and an item feature graph to model the latent item-item semantic structures. Then, we use an attention module to identify inherent connections between user-item interaction data and multimodal data, evaluate the impact of multimodal data on different interactions, and achieve early-step fusion of item features. Furthermore, our model optimizes item representations through the attention-guided multi-step fusion strategy and contrastive learning to improve recommendation performance. Extensive experiments on three real-world datasets show that our model outperforms state-of-the-art models.
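To make the abstract's key ingredients concrete, the following is a minimal, hypothetical sketch (not the authors' TMFUN implementation) of attention-guided fusion of item ID embeddings with projected multimodal features, paired with a simple InfoNCE-style contrastive term between the ID-only and fused item views. All class names, feature dimensions, the temperature, and the single-step fusion form are assumptions made for illustration; the paper's actual multi-step strategy and graph construction are not reproduced here.

# Illustrative sketch only: attention-weighted fusion of ID embeddings with
# visual/text features, plus a contrastive loss between two item views.
# Dimensions, layer choices, and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGuidedFusion(nn.Module):
    def __init__(self, num_items, id_dim=64, visual_dim=4096, text_dim=384):
        super().__init__()
        self.id_emb = nn.Embedding(num_items, id_dim)
        # Project each modality into the ID-embedding space.
        self.visual_proj = nn.Linear(visual_dim, id_dim)
        self.text_proj = nn.Linear(text_dim, id_dim)
        # Scores how much each modality should contribute per item.
        self.attn = nn.Linear(id_dim, 1)

    def forward(self, item_ids, visual_feat, text_feat):
        ids = self.id_emb(item_ids)                                  # (B, d)
        modalities = torch.stack(
            [self.visual_proj(visual_feat), self.text_proj(text_feat)], dim=1
        )                                                            # (B, 2, d)
        # Use the ID embedding as a query over the modality features.
        scores = self.attn(torch.tanh(modalities + ids.unsqueeze(1)))  # (B, 2, 1)
        weights = torch.softmax(scores, dim=1)
        fused = ids + (weights * modalities).sum(dim=1)              # early fusion
        return ids, fused

def contrastive_loss(view_a, view_b, temperature=0.2):
    """InfoNCE between two views of the same items within a batch."""
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.t() / temperature                                 # (B, B)
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)

# Usage with random features, just to show the shapes involved.
model = AttentionGuidedFusion(num_items=1000)
item_ids = torch.randint(0, 1000, (8,))
visual = torch.randn(8, 4096)
text = torch.randn(8, 384)
ids, fused = model(item_ids, visual, text)
loss = contrastive_loss(ids, fused)

In this toy version the attention weights decide, per item, how strongly each modality is blended into the ID embedding, while the contrastive term pulls the fused and ID-only representations of the same item together and pushes apart those of different items in the batch; the paper applies these ideas over item-item graphs and multiple fusion steps rather than in a single layer.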