Embedding
Computer science
Reinforcement learning
Graph
Proportion (ratio)
Approximation algorithm
Theoretical computer science
Maximization
Artificial intelligence
Temporal difference learning
Mathematical optimization
Machine learning
Algorithm
Mathematics
Quantum mechanics
Physics
Authors
Chao Wang, Yiming Liu, Xiaofeng Gao, Guihai Chen
Identifier
DOI:10.1007/978-3-030-73197-7_48
Abstract
The social influence maximization problem has been widely studied by industrial and theoretical researchers over the years. However, with the skyrocketing scale of networks and the growing complexity of application scenarios, traditional approximation approaches suffer from weak approximation guarantees and poor empirical performance. Moreover, they cannot be applied to new users in dynamic networks. To tackle these problems, we introduce a social influence maximization algorithm based on graph embedding and reinforcement learning. Nodes in the graph are represented by their embeddings, and we then formulate a reinforcement learning model in which both states and actions are represented as vectors in a low-dimensional space. This allows us to handle graphs of various scenarios and sizes simply by learning the parameters of a deep neural network. Hence, our model can be applied to both large-scale and dynamic social networks. Extensive real-world experiments show that our model significantly outperforms baselines across various data sets, and that the model learned on small-scale graphs can be generalized to large-scale ones.
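The abstract frames seed selection for influence maximization as a reinforcement learning problem over node embeddings: the state is built from the seeds chosen so far, an action is the next node to add, and a learned scoring network ranks candidates. The sketch below is a minimal, hypothetical illustration of that framing, not the authors' actual method: it uses random node embeddings, a toy independent-cascade simulator, and a linear Q-function in place of the paper's learned graph embeddings and deep neural network; all names here (`simulate_ic`, `q_value`, the toy graph) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy directed graph: node -> [(neighbor, activation probability)] (assumed example).
graph = {
    0: [(1, 0.3), (2, 0.3)],
    1: [(3, 0.3)],
    2: [(3, 0.3), (4, 0.3)],
    3: [(5, 0.3)],
    4: [(5, 0.3)],
    5: [],
}
num_nodes = len(graph)
emb_dim = 8

# Stand-in node embeddings; the paper learns these via graph embedding.
node_emb = rng.normal(size=(num_nodes, emb_dim))

def state_embedding(seeds):
    """State = sum of embeddings of the seed nodes selected so far."""
    if not seeds:
        return np.zeros(emb_dim)
    return node_emb[list(seeds)].sum(axis=0)

def simulate_ic(seeds, trials=200):
    """Monte Carlo estimate of influence spread under the independent cascade model."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v, p in graph[u]:
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

# Linear Q-function over [state_emb, action_emb]; a stand-in for the deep network.
w = rng.normal(scale=0.1, size=2 * emb_dim)

def q_value(seeds, node):
    return w @ np.concatenate([state_embedding(seeds), node_emb[node]])

def select_seeds(k):
    """Greedy seed selection: repeatedly add the highest-scoring remaining node."""
    seeds = []
    for _ in range(k):
        candidates = [v for v in range(num_nodes) if v not in seeds]
        seeds.append(max(candidates, key=lambda v: q_value(seeds, v)))
    return seeds

def train(episodes=50, k=2, lr=0.01, epsilon=0.2):
    """Simplified Q-learning-style updates: reward = marginal influence gain."""
    global w
    for _ in range(episodes):
        seeds = []
        for _ in range(k):
            candidates = [v for v in range(num_nodes) if v not in seeds]
            if rng.random() < epsilon:
                a = int(rng.choice(candidates))
            else:
                a = max(candidates, key=lambda v: q_value(seeds, v))
            reward = simulate_ic(seeds + [a], trials=50) - simulate_ic(seeds, trials=50)
            x = np.concatenate([state_embedding(seeds), node_emb[a]])
            td_error = reward - w @ x  # one-step target; the full method would bootstrap
            w = w + lr * td_error * x
            seeds.append(a)

train()
chosen = select_seeds(k=2)
print("selected seeds:", chosen, "estimated spread:", simulate_ic(chosen))
```

Because the scoring function depends only on fixed-dimensional embeddings rather than on graph size, a policy of this shape can in principle be trained on small graphs and reused on larger or dynamic ones, which is the generalization property the abstract claims for the proposed model.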