Computer science
Softmax function
Embedding
Convolutional neural network
Feature learning
Leverage (statistics)
Artificial intelligence
Bipartite graph
Information retrieval
Graph
Fuzzy logic
Data mining
Machine learning
Theoretical computer science
Authors
Juan Ni, Zhenhua Huang, Yang Hu, Chen Lin
Identifier
DOI:10.1016/j.ins.2021.09.006
Abstract
Recommender systems have recently received a lot of attention in the information service community. In many application scenarios, such as Internet of Things (IoT) environments, item multimodal auxiliary information (such as text and images) can be obtained to enrich item feature representations and to increase user satisfaction with recommendations. Motivated by this fact, this paper introduces a novel two-stage embedding model (TSEM), which adequately leverages item multimodal auxiliary information to substantially improve recommendation performance. Specifically, it comprises two sequential stages: graph convolutional embedding (GCE) and multimodal joint fuzzy embedding (MJFE). In the former, we first generate a bipartite graph from user-item interactions and then use it to construct user and item backbone features via a spatial-based graph convolutional network (SGCN). In the latter, by employing item multimodal auxiliary information, we integrate multi-task deep learning, deterministic Softmax, and fuzzy Softmax into a convolutional neural network (CNN)-based learning framework, which is optimized to accurately obtain user backbone features and item semantic-enhanced fuzzy (SEF) features. After TSEM converges, user backbone features and item SEF features can be used to compute user preferences on items via Euclidean distance. Extensive experiments on two real-world datasets show that the proposed TSEM model significantly outperforms state-of-the-art baselines in terms of various evaluation metrics.
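To make the abstract's pipeline concrete, the sketch below illustrates in Python the two steps it spells out explicitly: a single spatial graph-convolution pass over the user-item bipartite graph to obtain backbone features, and ranking items for a user by Euclidean distance between user backbone features and item SEF features. This is a minimal illustration under stated assumptions, not the authors' TSEM implementation: the interaction matrix R, the placeholder embeddings U0 and I0, the one-hop mean aggregation, and the reuse of item backbone features in place of the CNN-derived SEF features are all hypothetical stand-ins.

import numpy as np

rng = np.random.default_rng(0)
num_users, num_items, dim = 4, 6, 8

# Hypothetical binary interaction matrix R (users x items); its nonzeros define
# the edges of the user-item bipartite graph described in the abstract.
R = (rng.random((num_users, num_items)) > 0.5).astype(float)

# Placeholder initial embeddings standing in for the model's learned inputs.
U0 = rng.normal(size=(num_users, dim))
I0 = rng.normal(size=(num_items, dim))

def normalize_rows(A):
    # Row-normalize an adjacency block so each node averages over its neighbors.
    deg = A.sum(axis=1, keepdims=True)
    return A / np.maximum(deg, 1.0)

# One spatial aggregation step: users pool the items they interacted with, and
# items pool the users who interacted with them.
U_backbone = normalize_rows(R) @ I0      # user backbone features
I_backbone = normalize_rows(R.T) @ U0    # item backbone features

# Stand-in for the MJFE stage: in the paper, item SEF features come from the
# CNN-based multimodal branch; here we simply reuse the backbone features.
I_sef = I_backbone

def rank_items_for_user(u_idx, k=3):
    # Smaller Euclidean distance = stronger predicted preference.
    dists = np.linalg.norm(I_sef - U_backbone[u_idx], axis=1)
    return np.argsort(dists)[:k]

print(rank_items_for_user(0))  # indices of the 3 closest items for user 0

In the actual model, both feature sets would be learned by optimizing the GCE and MJFE stages described above; the distance-based ranking shown here only mirrors the final preference-computation step the abstract states.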