The generative adversarial network (GAN) is a popular deep learning technique for artificial intelligence tasks and has been widely studied and applied to processing images, speech, text, and other data. In particular, GANs are widely adopted in image processing, for tasks such as image style transfer, image restoration, and image super-resolution. Although GANs show remarkable success in image generation, training is usually unstable and trained models often collapse, so that many of the generated images share the same color or texture pattern. In this paper, the generator and discriminator networks are modified by adding residual blocks to the architecture to learn better image features. To reduce the loss of image features during training and to stabilize image generation, we use feature matching to minimize the feature-level discrepancy between real and generated images. Experiments show that the proposed method improves performance and outperforms several state-of-the-art methods.
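
The two ingredients named above can be illustrated with a minimal PyTorch sketch. The layer sizes, the choice of discriminator layer used for feature matching, and the block layout are assumptions for illustration, not the authors' reference implementation; the feature-matching term follows the commonly used batch-averaged form.

```python
# Minimal sketch (assumed PyTorch implementation, not the authors' code).
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Residual block of the kind inserted into the generator/discriminator."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Skip connection lets the block learn a residual correction to its input.
        return torch.relu(x + self.body(x))


def feature_matching_loss(real_feats: torch.Tensor, fake_feats: torch.Tensor) -> torch.Tensor:
    """L2 distance between batch-averaged discriminator features of real and generated images."""
    return torch.mean((real_feats.mean(dim=0) - fake_feats.mean(dim=0)) ** 2)
```

In training, `real_feats` and `fake_feats` would be intermediate activations taken from the same discriminator layer for a batch of real and generated images, and the resulting loss would be added to the generator objective to keep the generated feature statistics close to those of real images.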