Deep learning-based methods for medical image fusion have become a hot topic in recent years. However, existing methods extract only general features and ignore the features that matter most for fusion, which restricts the expression of modality-unique information in the fused image. To address this limitation, we propose a novel disentangled representation network for medical image fusion with mutual information estimation, which extracts disentangled features, i.e., the shared and exclusive features between multi-modal medical images. In our method, a cross mutual information objective extracts the shared features of each modality pair by enforcing the fusion network to maximize the mutual information estimate between multi-modal medical images. The exclusive features are extracted with an adversarial objective that constrains the fusion network to minimize the mutual information estimate between the shared and exclusive features. These disentangled features offer interpretative advantages and enable the fused image to retain more details from the source images while improving its visual quality. Both qualitative and quantitative experiments demonstrate that our method outperforms several state-of-the-art methods.
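
The abstract does not specify the concrete estimator or network architecture, so the following is only a minimal sketch of how the two mutual information objectives could be wired up, assuming a MINE-style Donsker-Varadhan statistics network in PyTorch. All module names, feature dimensions, and the choice of estimator are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumption: MINE-style Donsker-Varadhan bound in PyTorch).
# Names, dimensions, and the estimator choice are illustrative, not the paper's code.
import math
import torch
import torch.nn as nn


class StatisticsNetwork(nn.Module):
    """Scores a pair of feature vectors; the gap between its scores on joint and
    shuffled pairs yields a lower bound on their mutual information."""

    def __init__(self, dim_a: int, dim_b: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_a + dim_b, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([a, b], dim=-1))


def mi_lower_bound(stat_net: StatisticsNetwork, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Donsker-Varadhan lower bound on I(a; b):
    E_joint[T(a, b)] - log E_marginal[exp(T(a, b_shuffled))]."""
    joint = stat_net(a, b).mean()
    shuffled = b[torch.randperm(b.size(0))]          # approximate product of marginals
    marginal = stat_net(a, shuffled)
    log_mean_exp = torch.logsumexp(marginal, dim=0) - math.log(marginal.size(0))
    return joint - log_mean_exp.squeeze()


if __name__ == "__main__":
    batch, dim = 32, 128
    # Stand-in features; in the full method these would come from the
    # shared/exclusive encoders of each imaging modality (e.g., CT and MRI).
    shared_ct, shared_mri = torch.randn(batch, dim), torch.randn(batch, dim)
    exclusive_ct = torch.randn(batch, dim)

    T_shared = StatisticsNetwork(dim, dim)
    T_excl = StatisticsNetwork(dim, dim)

    # Shared branch: encoders are trained to MAXIMIZE the cross mutual
    # information estimate between the shared codes of the two modalities.
    loss_shared = -mi_lower_bound(T_shared, shared_ct, shared_mri)

    # Exclusive branch (adversarial): the statistics network tightens the bound,
    # while the encoders are trained to MINIMIZE the mutual information estimate
    # between shared and exclusive features of the same modality.
    loss_exclusive = mi_lower_bound(T_excl, shared_ct, exclusive_ct)

    print(loss_shared.item(), loss_exclusive.item())
```

In this reading, the fusion network's encoder parameters receive gradients from both losses, while each statistics network is updated in an opposing step to keep its mutual information estimate tight; this two-player setup is the hedged interpretation of the adversarial objective described above.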