Recent years have witnessed significant growth in Graph Convolutional Networks (GCNs). As GCNs are widely applied to a variety of tasks, their safety issues have drawn the attention of many researchers. Recent studies have demonstrated that GCNs are vulnerable to adversarial attacks: they are easily fooled by deliberate perturbations, and a number of attack methods have been proposed. However, state-of-the-art methods, which incorporate meta-learning techniques, suffer from high computational costs, while heuristic methods, though efficient, lack satisfactory attack performance. To address this problem, we seek to identify the patterns of gradient-based attacks and exploit them to improve the performance of heuristic algorithms.