We address the challenging task of infrared and visible image fusion. Existing fusion methods struggle to balance clear target boundaries with rich texture details. In this paper, we propose a novel fusion model based on a triple-discriminator generative adversarial network that achieves this balance. The difference image obtained by subtracting one source image from the other highlights the difference information between the modalities, extracts image details, and captures target outlines in some scenes. Therefore, in addition to the visible and infrared discriminators, a third difference-image discriminator is introduced to preserve the difference between the infrared and visible images, thereby improving the contrast of infrared targets while retaining the texture details of visible images. Multi-level features extracted by the discriminators are used for information measurement, from which perceptual fusion weights are derived for adaptive fusion. An SSIM loss and a target edge-enhancement loss are also introduced to improve the quality of the fused image. Experiments on public datasets demonstrate that our model outperforms existing state-of-the-art fusion methods in both quantitative metrics and qualitative comparisons.
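To make the three concrete operations mentioned above tangible, the following is a minimal illustrative sketch in PyTorch. It is not the authors' implementation: the function names (`difference_image`, `ssim_loss`, `perceptual_weights`), the per-image min-max normalisation, the global (window-free) SSIM, and the squared-activation information measure are all assumptions chosen for brevity; in practice the SSIM term would typically use a sliding Gaussian window, and the information measure would operate on the discriminators' intermediate feature maps.

```python
# Illustrative sketch only: the exact formulations are assumptions,
# not the authors' definitive implementation.
import torch


def difference_image(ir: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
    """Absolute difference of registered IR/visible images, rescaled to [0, 1].

    ir, vis: (B, 1, H, W) tensors with intensities in [0, 1].
    """
    diff = (ir - vis).abs()
    # Per-image min-max normalisation (an assumption) so the difference map
    # is a valid image for the third (difference-image) discriminator.
    flat = diff.flatten(1)
    lo = flat.min(dim=1).values.view(-1, 1, 1, 1)
    hi = flat.max(dim=1).values.view(-1, 1, 1, 1)
    return (diff - lo) / (hi - lo + 1e-8)


def ssim_loss(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """1 - global SSIM between x and y (window-free variant, for brevity)."""
    c1, c2 = 0.01 ** 2, 0.03 ** 2          # standard SSIM stabilising constants
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
    return 1 - ssim


def perceptual_weights(feats_ir, feats_vis, temperature: float = 1.0):
    """Softmax fusion weights derived from multi-level discriminator features.

    feats_ir, feats_vis: lists of feature maps from the discriminators.
    The information measure (mean squared activation) and the temperature
    parameter are hypothetical choices for illustration.
    """
    info_ir = torch.stack([f.pow(2).mean() for f in feats_ir]).mean()
    info_vis = torch.stack([f.pow(2).mean() for f in feats_vis]).mean()
    return torch.softmax(torch.stack([info_ir, info_vis]) / temperature, dim=0)


if __name__ == "__main__":
    ir = torch.rand(2, 1, 64, 64)           # stand-in infrared batch
    vis = torch.rand(2, 1, 64, 64)          # stand-in visible batch
    fused = 0.5 * ir + 0.5 * vis            # placeholder for the generator output
    print(difference_image(ir, vis).shape)  # torch.Size([2, 1, 64, 64])
    print(ssim_loss(fused, vis).item())     # scalar structural-similarity loss
    w = perceptual_weights([torch.rand(2, 8, 16, 16)], [torch.rand(2, 8, 16, 16)])
    print(w)                                # two adaptive fusion weights summing to 1
```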