Reinforcement learning
Maximization
Generalization
Computer science
Artificial intelligence
Heuristics
Artificial neural network
Deep learning
Graph
Embedding
Mathematical optimization
Theoretical computer science
Mathematics
Mathematical analysis
Authors
Tiantian Chen, Siwen Yan, Jianxiong Guo, Weili Wu
Identifier
DOI: 10.1109/tcss.2023.3272331
Abstract
Aiming to select a small subset of nodes with maximum influence on a network, the influence maximization (IM) problem has been extensively studied. Since computing the influence spread of a given seed set is #P-hard, state-of-the-art methods, including heuristic and approximation algorithms, face great difficulties with theoretical guarantees, time efficiency, generalization, and so on, which makes them unable to adapt to large-scale networks and more complex applications. On the other hand, with the latest achievements of deep reinforcement learning (DRL) in artificial intelligence and other fields, much work has focused on exploiting DRL to solve combinatorial optimization (CO) problems. Inspired by this, we propose ToupleGDD, a novel end-to-end DRL framework for the IM problem, which incorporates three coupled graph neural networks (GNNs) for network embedding and double deep $Q$-networks (DQNs) for parameter learning. Previous efforts to solve the IM problem with DRL trained their models on subgraphs of the whole network and then tested them on the whole graph, which makes their performance unstable across different networks. In contrast, our model is trained on several small randomly generated graphs with a small budget and tested on completely different networks under various large budgets; it obtains results very close to IMM, outperforms OPIM-C on several datasets, and shows strong generalization ability. Finally, we conduct extensive experiments on synthetic and real-world datasets, and the results demonstrate the effectiveness and superiority of our model.
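The #P-hardness mentioned above refers to computing the expected influence spread of a seed set, which is why simulation-based estimates are the usual fallback. As an illustration only (the abstract does not specify a diffusion model; the independent cascade model with a uniform activation probability `p` is assumed here, and all names are hypothetical), a minimal Monte Carlo spread estimator might look like this:

```python
import random
from collections import defaultdict

def estimate_spread(edges, seeds, p=0.1, trials=1000, rng=None):
    """Monte Carlo estimate of expected influence spread under the
    independent cascade (IC) model: each newly activated node gets one
    chance to activate each inactive out-neighbor with probability p.
    This is an illustrative sketch, not the method of the paper."""
    rng = rng or random.Random(0)
    graph = defaultdict(list)
    for u, v in edges:          # build adjacency list of the directed graph
        graph[u].append(v)
    total = 0
    for _ in range(trials):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:         # BFS-style cascade, one hop per round
            nxt = []
            for u in frontier:
                for v in graph[u]:
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

# Toy directed graph with a single seed node
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
print(estimate_spread(edges, seeds={0}, p=0.5))
```

Because each estimate of one seed set already costs many simulations, the greedy baseline becomes expensive on large networks; this is the cost that sketch-based methods such as IMM/OPIM-C and learned policies such as the one proposed here aim to avoid.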