We proposed a new approach to representing and reconstructing multidimensional MR images. Specifically, a representation capable of disentangling different types of features in high-dimensional images was learned by training an autoencoder with separate latent spaces for image style transfer, e.g., contrast or geometry transfer. A latent diffusion model was introduced to capture the distributions of the disentangled latents for constrained reconstruction. A new formulation was developed to integrate the pre-learned representation with other complementary constraints for reconstruction from sparse data. We demonstrated the ability of our model to disentangle contrast and geometry features in multicontrast MR images, and its effectiveness in accelerated T1 and T2 mapping.
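The core mechanism — an encoder whose latent code is split into separate contrast and geometry parts, so that recombining parts from different images performs style transfer — can be illustrated with a minimal sketch. This is not the trained model from the paper: the dimensions, the linear encoder/decoder, and all variable names here are illustrative assumptions, standing in for the learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a flattened image of dimension D, and a latent code
# split into a "contrast" part and a "geometry" part.
D, d_con, d_geo = 64, 8, 8

# Linear stand-ins for the trained encoder/decoder networks.
W_enc = rng.standard_normal((d_con + d_geo, D)) / np.sqrt(D)
W_dec = np.linalg.pinv(W_enc)  # decoder approximated by the pseudo-inverse

def encode(x):
    """Map an image to its two disentangled latent parts."""
    z = W_enc @ x
    return z[:d_con], z[d_con:]

def decode(z_con, z_geo):
    """Reconstruct an image from a (contrast, geometry) latent pair."""
    return W_dec @ np.concatenate([z_con, z_geo])

# Contrast transfer: keep image A's geometry, borrow image B's contrast.
x_a = rng.standard_normal(D)
x_b = rng.standard_normal(D)
_, z_geo_a = encode(x_a)
z_con_b, _ = encode(x_b)
x_transfer = decode(z_con_b, z_geo_a)
```

In the paper's setting the encoder and decoder are deep networks and the disentanglement is enforced by the style-transfer training objective; the latent diffusion model is then trained on these latent codes to serve as a prior during constrained reconstruction.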