Mingsheng Fu, Liwei Huang, Ananya Rao, Athirai A. Irissappane, Jie Zhang
Source
Journal: IEEE Transactions on Industrial Informatics [Institute of Electrical and Electronics Engineers]; Date: 2023-02-01; Volume/Issue: 19 (2): 2049-2061; Citations: 1
Identifier
DOI:10.1109/tii.2022.3209290
Abstract
Deep reinforcement learning (DRL) based recommender systems are well suited to the user cold-start problem because they can capture user preferences progressively. However, most existing DRL-based recommender systems are suboptimal, since they use a single policy to fit the dynamics of all users. We reformulate recommendation as a multitask Markov Decision Process, where each task represents a set of similar users. Since similar users exhibit closer dynamics, a task-specific policy is more effective than one universal policy shared across all users. To make recommendations for cold-start users, we first use a default policy to collect a few initial interactions and identify the user's task, after which the corresponding task-specific policy is employed. We optimize the framework with Q-learning and account for task uncertainty via a mutual-information term over tasks. Experiments on three real-world datasets verify the effectiveness of the proposed framework.
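The cold-start procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-task Q-tables are fixed rather than learned, and the task-identification step uses a simple similarity score as a stand-in for the paper's mutual-information-based inference. All names (`default_policy`, `identify_task`, `recommend`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N_TASKS, N_ITEMS, WARMUP_STEPS = 3, 5, 4

# Hypothetical per-task Q-tables (task -> item scores). In the paper these
# are learned with Q-learning; here they are fixed for illustration.
task_q = rng.normal(size=(N_TASKS, N_ITEMS))

def default_policy(step):
    """Default policy: probe items in a fixed order for a cold-start user."""
    return step % N_ITEMS

def identify_task(feedback):
    """Assign the user to the task whose Q-values best match the observed
    feedback (a stand-in for the paper's task-inference mechanism)."""
    scores = task_q @ feedback  # similarity of feedback to each task
    return int(np.argmax(scores))

def recommend(user_feedback_fn):
    """Warm up with the default policy, then switch to a task-specific one."""
    feedback = np.zeros(N_ITEMS)
    for step in range(WARMUP_STEPS):
        item = default_policy(step)
        feedback[item] = user_feedback_fn(item)  # observed reward
    task = identify_task(feedback)
    # Task-specific policy: act greedily w.r.t. that task's Q-values.
    return task, int(np.argmax(task_q[task]))

# Simulated user whose preferences follow (hypothetical) task 1.
user = lambda item: task_q[1, item] + rng.normal(scale=0.1)
task, item = recommend(user)
```

The key design point this sketch mirrors is the two-phase interaction: a shared default policy gathers just enough feedback to place the user in a task, and only then does a specialized policy take over.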