In multi-agent reinforcement learning, exploration is more challenging than in the single-agent case because of the large joint state-action space and the need for fine-grained cooperation among agents. We extend the Intrinsic Curiosity Module (ICM), a curiosity-driven exploration method for single-agent environments, to the multi-agent setting and propose multi-agent curiosity-driven exploration (MACDE). We define the intrinsic reward for a team of agents as the sum of the individual agents' curiosity, where each agent's curiosity is its error in predicting the next state given its own action and the actions of the other agents. We evaluate MACDE on Predator-Prey and the StarCraft Multi-Agent Challenge. The results show that MACDE explores effectively and learns better policies in both environments.
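To make the reward definition concrete, the sketch below illustrates one way such a team curiosity signal could be computed: each agent keeps a forward model that predicts the next-state features from the current features and the joint action, and the team's intrinsic reward is the sum of the per-agent prediction errors. This is a minimal NumPy illustration under assumed names and dimensions (forward_model, team_intrinsic_reward, linear models, one-hot actions), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 3   # number of agents (assumed)
FEAT_DIM = 8   # state-feature dimension (assumed)
ACT_DIM = 5    # per-agent action dimension, one-hot (assumed)


def forward_model(params, feat, joint_action):
    """Linear forward model: predict next-state features from the
    current state features and the concatenated joint action."""
    x = np.concatenate([feat, joint_action])
    return params @ x  # shape (FEAT_DIM,)


def team_intrinsic_reward(params_per_agent, feat_t, feat_tp1, joint_action):
    """Team curiosity: sum over agents of each agent's squared
    prediction error on the next-state features."""
    reward = 0.0
    for params in params_per_agent:
        pred = forward_model(params, feat_t, joint_action)
        reward += 0.5 * np.sum((pred - feat_tp1) ** 2)
    return reward


# Toy usage with random features, actions, and model parameters.
params_per_agent = [
    0.1 * rng.normal(size=(FEAT_DIM, FEAT_DIM + N_AGENTS * ACT_DIM))
    for _ in range(N_AGENTS)
]
feat_t = rng.normal(size=FEAT_DIM)     # features of the current state
feat_tp1 = rng.normal(size=FEAT_DIM)   # features of the next state
# One-hot action per agent, concatenated into the joint action.
joint_action = np.concatenate(
    [np.eye(ACT_DIM)[rng.integers(ACT_DIM)] for _ in range(N_AGENTS)]
)

print("team intrinsic reward:",
      team_intrinsic_reward(params_per_agent, feat_t, feat_tp1, joint_action))
```

In practice the forward models would be learned networks operating on encoded state features (as in ICM), and the resulting intrinsic reward would be added to the environment reward during training; the sketch above only shows the summation structure of the team curiosity signal.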