Reinforcement learning
Computer science
Artificial intelligence
Artificial neural network
Causal structure
Machine learning
Graph
Variable (mathematics)
Theoretical computer science
Mathematics
Quantum mechanics
Physics
Mathematical analysis
Authors
Amir Amirinezhad, Saber Salehkaleybar, Matin Hashemi
Identifier
DOI: 10.1016/j.neunet.2022.06.028
Abstract
We study the problem of experiment design for learning causal structures from interventional data. We consider an active learning setting in which the experimenter intervenes on one of the variables in the system at each step and uses the results of the intervention to recover further causal relationships among the variables. The goal is to fully identify the causal structure with a minimum number of interventions. We present the first deep reinforcement learning-based solution to the experiment design problem. In the proposed method, we embed input graphs into vectors using a graph neural network and feed them to another neural network, which outputs a variable to intervene on at each step. Both networks are trained jointly via a Q-iteration algorithm. Experimental results show that the proposed method achieves competitive performance in recovering causal structures compared with previous work, while significantly reducing execution time on dense graphs.
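The core loop the abstract describes (pick an intervention, observe what it reveals, repeat until the structure is identified, with Q-learning minimizing the number of steps) can be illustrated with a toy tabular analogue. The sketch below is not the paper's method: it replaces the GNN embedding and deep Q-network with a tabular Q-function over intervention sets, and it assumes the simplified identifiability rule that intervening on a variable orients all edges incident to it (no latent confounders). The names `q_iteration`, `greedy_plan`, and `edges_oriented` are hypothetical helpers for this illustration.

```python
import random

def edges_oriented(skeleton, intervened):
    """Edges whose orientation is revealed so far: in this toy model,
    intervening on a variable orients every edge incident to it."""
    return {e for e in skeleton if e[0] in intervened or e[1] in intervened}

def q_iteration(skeleton, n_vars, episodes=2000, alpha=0.5, gamma=0.95, eps=0.2):
    """Tabular Q-learning over intervention sets. Reward is -1 per
    intervention, so the optimal policy minimizes the number of steps,
    mirroring the paper's objective of fewest interventions."""
    Q = {}
    q = lambda s, a: Q.get((s, a), 0.0)
    for _ in range(episodes):
        state = frozenset()  # set of variables intervened on so far
        while edges_oriented(skeleton, state) != skeleton:
            actions = [v for v in range(n_vars) if v not in state]
            a = (random.choice(actions) if random.random() < eps
                 else max(actions, key=lambda x: q(state, x)))
            nxt = state | {a}
            done = edges_oriented(skeleton, nxt) == skeleton
            target = -1.0 if done else -1.0 + gamma * max(
                q(nxt, b) for b in range(n_vars) if b not in nxt)
            Q[(state, a)] = q(state, a) + alpha * (target - q(state, a))
            state = nxt
    return Q

def greedy_plan(Q, skeleton, n_vars):
    """Roll out the learned greedy policy to get an intervention sequence."""
    state, plan = frozenset(), []
    while edges_oriented(skeleton, state) != skeleton:
        actions = [v for v in range(n_vars) if v not in state]
        a = max(actions, key=lambda x: Q.get((state, x), 0.0))
        plan.append(a)
        state = state | {a}
    return plan

# Star skeleton: variable 0 touches every edge, so a single
# intervention on 0 identifies the whole structure.
random.seed(0)
skeleton = frozenset({(0, 1), (0, 2), (0, 3)})
Q = q_iteration(skeleton, n_vars=4)
plan = greedy_plan(Q, skeleton, n_vars=4)
print(plan)
```

In the paper, the tabular state is replaced by a graph neural network embedding of the partially oriented graph, so the learned policy generalizes across graphs rather than being recomputed per instance.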