Computer Science
Graph
Theoretical Computer Science
Artificial Intelligence
Artificial Neural Network
Inductive Bias
Knowledge Graph
Semantics (Computer Science)
Multi-task Learning
Programming Language
Economics
Management
Task (Project Management)
Authors
Yujia Li, Daniel Tarlow, Marc Brockschmidt, Richard S. Zemel
Source
Venue: arXiv (Cornell University)
Date: 2015-11-17
Citations: 1429
Identifiers
DOI: 10.48550/arxiv.1511.05493
Abstract
Graph-structured data appears frequently in domains including chemistry, natural language semantics, social networks, and knowledge bases. In this work, we study feature learning techniques for graph-structured inputs. Our starting point is previous work on Graph Neural Networks (Scarselli et al., 2009), which we modify to use gated recurrent units and modern optimization techniques and then extend to output sequences. The result is a flexible and broadly useful class of neural network models that has favorable inductive biases relative to purely sequence-based models (e.g., LSTMs) when the problem is graph-structured. We demonstrate the capabilities on some simple AI (bAbI) and graph algorithm learning tasks. We then show it achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.
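The abstract's central modification, replacing the original Graph Neural Network recurrence with gated recurrent units for the node-state update, can be sketched as follows. This is a minimal illustration assuming a PyTorch setting; the class name GGNNLayer, the per-edge-type linear message functions, and the toy inputs are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of one gated propagation step over a typed graph.
import torch
import torch.nn as nn

class GGNNLayer(nn.Module):
    """One round of gated message passing: neighbors send linear messages per
    edge type, and each node updates its hidden state with a GRU cell."""
    def __init__(self, hidden_dim: int, num_edge_types: int):
        super().__init__()
        # One message transform per edge type (direction handling left to the caller).
        self.msg = nn.ModuleList(
            nn.Linear(hidden_dim, hidden_dim) for _ in range(num_edge_types)
        )
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h:   (num_nodes, hidden_dim) node states
        # adj: (num_edge_types, num_nodes, num_nodes) adjacency per edge type
        agg = torch.zeros_like(h)
        for e, lin in enumerate(self.msg):
            agg = agg + adj[e] @ lin(h)   # aggregate typed neighbor messages
        return self.gru(agg, h)           # gated (GRU) node-state update

# Toy usage: 5 nodes, 8-dim states, 2 edge types, 4 propagation steps.
layer = GGNNLayer(hidden_dim=8, num_edge_types=2)
h = torch.randn(5, 8)
adj = torch.randint(0, 2, (2, 5, 5)).float()
for _ in range(4):
    h = layer(h, adj)
print(h.shape)  # torch.Size([5, 8])
```

Unrolling the layer for a fixed number of steps mirrors the propagation model described in the paper; the sequence-output extension discussed in the abstract would add a separate output model on top of the resulting node states.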