In this paper, we propose a signed distance field (SDF)-based deep Q-learning framework for multi-object rearrangement. Our method learns to rearrange objects through non-prehensile manipulation, e.g., pushing, in unstructured environments. To reliably estimate Q-values across diverse scenes, we train the Q-network using an SDF-based scene graph as the state-goal representation. To this end, we introduce SDFGCN, a scalable Q-network architecture that estimates Q-values from a set of SDF images in a permutation-invariant manner using graph convolutional networks. In contrast to grasping-based rearrangement methods, which depend on the performance of grasp predictive models for perception and movement, our approach enables rearrangement of unseen objects, including hard-to-grasp objects. Moreover, our method does not require any expert demonstrations. We observe that SDFGCN handles unseen objects in challenging configurations, both in simulation and in the real world.
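To make the architectural idea concrete, below is a minimal sketch (not the authors' implementation) of a Q-network that encodes per-object SDF images and passes them through graph convolutions over a fully connected scene graph, so that each object's Q-value does not depend on the ordering of the objects. All names (`SDFEncoder`, `GraphConv`, `SDFGCNSketch`) and design choices (feature sizes, uniform adjacency normalization, per-object Q output) are illustrative assumptions written in PyTorch.

```python
# Hypothetical sketch of a permutation-invariant Q-network over per-object SDF images.
# Not the paper's SDFGCN; names and hyperparameters are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SDFEncoder(nn.Module):
    """Encode one SDF image (1 x H x W) into a feature vector with a small shared CNN."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, sdf):                      # sdf: (N, 1, H, W), one image per object
        return self.fc(self.conv(sdf).flatten(1))  # (N, feat_dim)


class GraphConv(nn.Module):
    """One graph-convolution step: mix neighbor features through a shared linear map."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, h, adj):                   # h: (N, dim), adj: (N, N) normalized adjacency
        return F.relu(self.lin(adj @ h))


class SDFGCNSketch(nn.Module):
    """Per-object Q-values from current and goal SDF images; object order is irrelevant."""
    def __init__(self, feat_dim=64, n_layers=2):
        super().__init__()
        self.encoder = SDFEncoder(feat_dim)
        self.gconvs = nn.ModuleList([GraphConv(2 * feat_dim) for _ in range(n_layers)])
        self.q_head = nn.Linear(2 * feat_dim, 1)

    def forward(self, cur_sdf, goal_sdf):
        # Shared encoder on current and goal SDFs -> per-object state-goal features.
        h = torch.cat([self.encoder(cur_sdf), self.encoder(goal_sdf)], dim=-1)  # (N, 2*feat_dim)
        n = h.shape[0]
        # Fully connected scene graph with uniform normalization (a simplifying assumption).
        adj = torch.full((n, n), 1.0 / n, device=h.device)
        for g in self.gconvs:
            h = g(h, adj)
        # One Q-value per object (e.g., per candidate push); relabeling the objects
        # permutes the outputs identically, so no particular ordering is assumed.
        return self.q_head(h).squeeze(-1)        # (N,)


# Example usage: 4 objects with 64x64 SDF images for the current scene and the goal.
if __name__ == "__main__":
    net = SDFGCNSketch()
    q = net(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
    print(q.shape)  # torch.Size([4])
```

Because the CNN encoder and graph-convolution weights are shared across objects, the same network applies to scenes with any number of objects, which is what makes a GCN-style Q-network scalable to varying scene sizes.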