Authors
Chhotelal Kumar, Mukesh Kumar
Identifier
DOI: 10.1016/j.compeleceng.2024.109138
Abstract
A session-based recommender system (SBRS) captures the dynamic behavior of a user to recommend the next item in the current session. Given the user's past interactions in the ongoing session, the SBRS predicts the next item the user is likely to interact with. Sessions vary in duration from minutes to hours; many recommender systems prioritize longer sessions, yet most datasets contain mostly short sessions, and predicting the next item in a short session is challenging because of the limited context. In addition, obtaining item embeddings is problematic in most SBRS because they rely on one-hot encoding and therefore suffer from data sparsity. To address these issues, a long short-term memory (LSTM) network with an attention mechanism is proposed: the LSTM captures the sequential context, and the attention mechanism focuses on the target items. To mitigate the data sparsity problem, the Word2Vec embedding technique is used. The proposed model was tested on two publicly available datasets, 30Music and RSC19, and the results were compared with basic sequence models, i.e., RNN and LSTM. LSTM achieved a 41.95% hit rate on 30Music, while LSTM-Attention achieved 81.47% on RSC19. In summary, LSTM outperformed RNN and LSTM-Attention on 30Music, whereas LSTM with attention outperformed the other models on RSC19.
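To make the described architecture concrete, the following is a minimal sketch of a next-item model of the kind the abstract outlines: item embeddings feeding an LSTM over the session, with a soft-attention layer over the LSTM outputs to score candidate next items. The class name, dimensions, attention form, and scoring head are illustrative assumptions, not the authors' exact design; the paper pre-trains item vectors with Word2Vec rather than using one-hot encoding, which here would correspond to loading a pre-trained embedding matrix.

```python
# Hypothetical sketch (PyTorch assumed) of an LSTM-with-attention
# session-based recommender; not the authors' exact implementation.
import torch
import torch.nn as nn


class SessionLSTMAttention(nn.Module):
    def __init__(self, num_items: int, embed_dim: int = 100, hidden_dim: int = 128):
        super().__init__()
        # The paper uses Word2Vec item vectors instead of one-hot encoding;
        # a pre-trained matrix could be loaded via nn.Embedding.from_pretrained(...).
        self.item_embedding = nn.Embedding(num_items, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Additive attention weights over the LSTM outputs.
        self.attn = nn.Linear(hidden_dim, 1)
        # Scores every candidate item for next-item prediction.
        self.output = nn.Linear(hidden_dim, num_items)

    def forward(self, session_items: torch.Tensor) -> torch.Tensor:
        # session_items: (batch, seq_len) item IDs of the ongoing session.
        embedded = self.item_embedding(session_items)        # (B, T, E)
        outputs, _ = self.lstm(embedded)                      # (B, T, H)
        weights = torch.softmax(self.attn(outputs), dim=1)    # (B, T, 1)
        context = (weights * outputs).sum(dim=1)              # (B, H)
        return self.output(context)                           # (B, num_items)


if __name__ == "__main__":
    model = SessionLSTMAttention(num_items=10_000)
    batch = torch.randint(1, 10_000, (32, 5))  # 32 short sessions of 5 items
    scores = model(batch)
    print(scores.shape)                        # torch.Size([32, 10000])
```

Ranking the output scores and checking whether the true next item appears in the top-k list is how a hit-rate metric like the one reported (e.g., 81.47% on RSC19) would typically be computed.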