Reinforcement learning
Computer science
Heuristic
Key (lock)
Class (philosophy)
Set (abstract data type)
Artificial intelligence
Complex network
Deep learning
Theoretical computer science
Distributed computing
Computer security
World Wide Web
Programming language
Authors
Changjun Fan, Li Zeng, Yizhou Sun, Yang-Yu Liu
Identifiers
DOI:10.1038/s42256-020-0177-2
Abstract
Finding an optimal set of nodes, called key players, whose activation (or removal) would maximally enhance (or degrade) a certain network functionality is a fundamental class of problems in network science. Potential applications include network immunization, epidemic control, drug design and viral marketing. Because these problems are generally NP-hard, they typically cannot be solved by exact algorithms with polynomial time complexity. Many approximate and heuristic strategies have been proposed for specific application scenarios, yet we still lack a unified framework to efficiently solve this class of problems. Here, we introduce a deep reinforcement learning framework, FINDER, which can be trained purely on small synthetic networks generated by toy models and then applied to a wide spectrum of application scenarios. Extensive experiments under various problem settings demonstrate that FINDER significantly outperforms existing methods in terms of solution quality. Moreover, it is several orders of magnitude faster than existing methods on large networks. The presented framework opens up a new direction of using deep learning techniques to understand the organizing principles of complex networks, which enables us to design networks that are more robust against both attacks and failures.

A fundamental problem in network science is how to find an optimal set of key players whose activation or removal significantly impacts network functionality. The authors propose a deep reinforcement learning framework that can be trained on small networks to understand the organizing principles of complex networked systems.
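To make the problem setting concrete, the sketch below implements one of the classic heuristic baselines the abstract alludes to (not the authors' FINDER method): adaptive highest-degree removal for network dismantling, where we repeatedly delete the node of highest remaining degree and measure how quickly the giant connected component collapses. All function and variable names here are illustrative choices, not from the paper.

```python
from collections import defaultdict

def giant_component_size(adj, removed):
    """Size of the largest connected component, ignoring removed nodes."""
    seen = set(removed)
    best = 0
    for start in adj:
        if start in seen:
            continue
        # BFS/DFS from an unvisited node to measure its component
        stack, comp = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, comp)
    return best

def greedy_key_players(edges, k):
    """Pick k 'key players' by adaptive highest-degree removal,
    a standard heuristic baseline for network dismantling."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    removed = []
    for _ in range(k):
        candidates = [u for u in adj if u not in removed]
        # degree is recomputed each round over the surviving nodes
        target = max(candidates,
                     key=lambda u: sum(1 for v in adj[u] if v not in removed))
        removed.append(target)
    return removed

# Tiny example: a hub (node 0) with four leaves, plus a pendant edge 4-5.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (4, 5)]
print(greedy_key_players(edges, 1))  # the hub is chosen first: [0]
```

FINDER replaces this hand-crafted degree heuristic with a learned policy (graph embedding plus deep Q-learning) that is trained on small synthetic graphs and generalizes to large real networks.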