This paper studies a class of multi-agent games, termed differential graphical games, in which the interactions between agents are prescribed by a communication graph structure. Ideas from cooperative control are used to achieve synchronization of all agents to the trajectory of a leader node. New coupled Bellman and Hamilton-Jacobi-Bellman (HJB) equations are developed for this class of games using Integral Reinforcement Learning. Nash equilibrium solutions are given in terms of the solutions to a set of coupled continuous-time HJB equations. A multi-agent policy iteration algorithm is presented that learns the Nash solution in real time without requiring complete knowledge of the agents' dynamic models, and a proof of convergence for this algorithm is given. Finally, an online multi-agent method based on policy iteration is developed that uses a critic network to solve all of the coupled HJB equations simultaneously for the graphical game.
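To make the structure of these coupled equations concrete, a minimal sketch follows under assumed notation: linear node dynamics $\dot{x}_i = A x_i + B_i u_i$, leader dynamics $\dot{x}_0 = A x_0$, graph edge weights $e_{ij}$, pinning gain $g_i$, node in-degree $d_i$, and neighbor set $N_i$; these symbols are illustrative and not fixed by the abstract itself. Each node $i$ penalizes a local neighborhood tracking error
\[
\delta_i = \sum_{j \in N_i} e_{ij}\,(x_i - x_j) + g_i\,(x_i - x_0),
\]
through a quadratic cost coupled to its neighbors' controls,
\[
J_i = \frac{1}{2}\int_0^{\infty} \Big( \delta_i^{\top} Q_{ii}\,\delta_i + u_i^{\top} R_{ii}\,u_i + \sum_{j \in N_i} u_j^{\top} R_{ij}\,u_j \Big)\,dt .
\]
Setting the Hamiltonian of each such cost to zero along the error dynamics yields one coupled HJB equation per node,
\[
0 = \nabla V_i^{\top} \Big( A\,\delta_i + (d_i + g_i)\,B_i u_i - \sum_{j \in N_i} e_{ij}\,B_j u_j \Big) + \frac{1}{2} \Big( \delta_i^{\top} Q_{ii}\,\delta_i + u_i^{\top} R_{ii}\,u_i + \sum_{j \in N_i} u_j^{\top} R_{ij}\,u_j \Big),
\]
with the stationarity condition giving the best-response policy
\[
u_i = -(d_i + g_i)\, R_{ii}^{-1} B_i^{\top} \nabla V_i .
\]
The equations are coupled because each value function $V_i$ depends on the neighbors' policies $u_j$.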
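The policy-iteration step itself can be sketched in a few lines. The code below is a minimal, model-based illustration for a single node's linear-quadratic subproblem (Kleinman's algorithm): policy evaluation solves a Lyapunov equation for the value matrix, and policy improvement updates the feedback gain. The dynamics, weights, and the helper name policy_iteration are assumptions chosen for illustration; in the full graphical game each node's evaluation is additionally coupled to its neighbors' current policies, and the online method described above performs the evaluation step via Integral Reinforcement Learning without full model knowledge.

# Minimal policy-iteration sketch (Kleinman's algorithm) for the per-node
# linear-quadratic subproblem underlying the coupled HJB equations.
# All dynamics and weights below are illustrative assumptions; in the
# graphical game these iterations are coupled through neighbor policies.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def policy_iteration(A, B, Q, R, K0, iters=20):
    """Alternate policy evaluation (Lyapunov solve) and policy improvement."""
    K = K0
    for _ in range(iters):
        Acl = A - B @ K                      # closed-loop dynamics under gain K
        # Policy evaluation: (A - B K)^T P + P (A - B K) = -(Q + K^T R K)
        P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        # Policy improvement: minimizing the Hamiltonian gives the new gain
        K = np.linalg.solve(R, B.T @ P)
    return P, K

# Illustrative double-integrator node with a stabilizing initial policy.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
K0 = np.array([[1.0, 1.0]])
P, K = policy_iteration(A, B, Q, R, K0)
print("Value matrix P:\n", P, "\nConverged gain K:\n", K)

Starting from a stabilizing initial gain, each pass performs a policy evaluation followed by a greedy improvement; this alternation is the building block that the multi-agent algorithm distributes across the nodes of the graph.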