Authors
Bin Wu,Xun Su,Jing Liang,Zhongchuan Sun,Lihong Zhong,Yangdong Ye
Abstract
Recent Transformer-based architectures have achieved encouraging performance for sequential recommendation, but their computational complexity is quadratic in the sequence length. MLP4Rec is a promising solution to this issue, capturing item transition patterns with an MLP-Mixer layer. Despite its effectiveness, we argue that it still faces two critical limitations. On the one hand, it employs the one-hot ID technique to generate each user/item embedding, which carries no specific semantics apart from serving as an identifier. Given these ID embeddings as the original input of an MLP-Mixer layer, it is non-trivial to distill information useful to the downstream layers. On the other hand, it fails to explicitly differentiate the significance of the different factors of an item, making it unrealistic to capture a user's true taste within a short context; meanwhile, it also does not discriminate the importance of each item instance given the recent actions of a user. To overcome these two limitations, we propose a new solution for sequential recommendation, namely a graph Gating-Mixer Recommender (GMRec). Our solution decomposes the sequential recommendation workflow into three steps. First, by means of graph neural networks, we embed a linear graph propagation module to produce high-quality user and item embeddings. Second, we replace the MLP-Mixer layer in MLP4Rec with a purpose-built dual gating block, which dynamically controls which features and which items are passed to the downstream layers. Lastly, we devise a user-specific gating strategy to adaptively integrate the two components of GMRec. Extensive experiments are performed on the Beauty, Cellphone, Gowalla, and ML-10M datasets, demonstrating the rationality and effectiveness of our solution.
Specifically, when Precision@10, Recall@10, MAP@10, and NDCG@10 are adopted as evaluation metrics, the performance gains of GMRec over recent state-of-the-art methods on the four datasets are 11.91%, 19.46%, 9.56%, and 13.01%, respectively. Our implementation code and datasets are available at https://github.com/wubinzzu/GMRec.
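To make the dual gating idea concrete, the following is a minimal NumPy sketch of how a feature-level gate (weighting each embedding dimension) can be composed with an item-level gate (weighting each item instance in a user's recent sequence). All parameter names and shapes here are hypothetical illustrations, not the exact architecture described in the GMRec paper.

```python
# Illustrative sketch of dual gating, NOT the authors' exact implementation.
# A sigmoid feature gate scales each embedding dimension; a sigmoid item
# gate then scales each item instance in the sequence.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_gating(seq_emb, W_f, b_f, W_i, b_i):
    """seq_emb: (L, d) embeddings of a user's L most recent items.
    W_f, b_f: feature-gate parameters, producing (L, d) weights in (0, 1).
    W_i, b_i: item-gate parameters, producing (L, 1) weights in (0, 1)."""
    feat_gate = sigmoid(seq_emb @ W_f + b_f)   # (L, d): per-dimension gate
    gated = seq_emb * feat_gate                # feature-level gating
    item_gate = sigmoid(gated @ W_i + b_i)     # (L, 1): per-instance gate
    return gated * item_gate                   # item-level gating

L, d = 5, 8
seq = rng.standard_normal((L, d))
out = dual_gating(seq,
                  rng.standard_normal((d, d)) * 0.1, np.zeros(d),
                  rng.standard_normal((d, 1)) * 0.1, np.zeros(1))
print(out.shape)
```

Because both gates lie in (0, 1), the gated output is always elementwise no larger in magnitude than the input embeddings, which is what lets the block softly suppress uninformative features and item instances rather than discarding them outright.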