Champion
Artificial intelligence
Artificial neural network
Reinforcement learning
Computer science
Selection (genetic algorithm)
Tree (set theory)
Machine learning
Domain (mathematical analysis)
Mathematics
Political science
Mathematical analysis
Law
Authors
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, Demis Hassabis
Source
Journal: Nature
[Nature Portfolio]
Date: 2017-10-01
Volume/Issue: 550 (7676): 354-359
Citations: 8359
Abstract
A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo's own move selections and also the winner of AlphaGo's games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100-0 against the previously published, champion-defeating AlphaGo.
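To make the training loop the abstract describes concrete, the following is a minimal, self-contained Python sketch of the same idea under toy assumptions: a hypothetical score-race game in place of Go, a tabular lookup table in place of the deep neural network, and policy-guided rollouts in place of the paper's tree search. None of the names, game rules, or hyperparameters below come from the paper; they are illustrative stand-ins only. The value head is trained (mirroring "predict ... the winner of AlphaGo's games") but, for brevity, the toy search consults only the policy head.

import random
from collections import defaultdict

MOVES = [1, 2, 3]        # each move adds points; first player to reach TARGET wins
TARGET = 10

def winner(scores):
    """Return 0 or 1 once a side has reached TARGET, else None."""
    for p in (0, 1):
        if scores[p] >= TARGET:
            return p
    return None

class TabularNet:
    """Stand-in for the deep network: a lookup table from a game state to
    move probabilities (policy head) and a win prediction (value head)."""
    def __init__(self, lr=0.3):
        self.policy = defaultdict(lambda: [1.0 / len(MOVES)] * len(MOVES))
        self.value = defaultdict(float)
        self.lr = lr

    def train(self, examples):
        # Pull the policy toward the search policy pi and the value toward the
        # final outcome z, mirroring "predict AlphaGo's own move selections
        # and also the winner of AlphaGo's games".
        for state, pi, z in examples:
            old = self.policy[state]
            self.policy[state] = [(1 - self.lr) * a + self.lr * b
                                  for a, b in zip(old, pi)]
            self.value[state] += self.lr * (z - self.value[state])

def search_policy(net, scores, player, n_sim=10):
    """Crude stand-in for the paper's tree search: evaluate each move with
    rollouts that follow the current network's policy, then normalise the
    win counts into a search policy pi."""
    counts = [1e-6] * len(MOVES)
    for i, move in enumerate(MOVES):
        for _ in range(n_sim):
            s, p = list(scores), player
            s[p] += move
            p = 1 - p
            while winner(s) is None:
                m = random.choices(MOVES, weights=net.policy[(tuple(s), p)])[0]
                s[p] += m
                p = 1 - p
            if winner(s) == player:
                counts[i] += 1
    total = sum(counts)
    return [c / total for c in counts]

def self_play_game(net):
    """Play one game against itself; return (state, pi, z) training examples."""
    scores, player, history = [0, 0], 0, []
    while winner(scores) is None:
        state = (tuple(scores), player)
        pi = search_policy(net, scores, player)
        history.append((state, pi, player))
        move = random.choices(MOVES, weights=pi)[0]
        scores[player] += move
        player = 1 - player
    w = winner(scores)
    # z = +1 for positions whose player to move went on to win, else -1.
    return [(s, pi, 1.0 if pl == w else -1.0) for s, pi, pl in history]

if __name__ == "__main__":
    net = TabularNet()
    for _ in range(20):                    # iterate: self-play, then retrain
        batch = []
        for _ in range(10):
            batch.extend(self_play_game(net))
        net.train(batch)
    opening = ((0, 0), 0)
    print("learned opening policy:", [round(p, 2) for p in net.policy[opening]])

The point of the sketch is the feedback cycle from the abstract: the search output trains the network, and the improved network then guides the next round of self-play searches, so each iteration produces stronger move selection and stronger self-play than the last.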