Previous studies on Multimodal Summarization (MMS) have mainly focused on the effective selection and filtering of visual features to assist cross-modal fusion and text generation. However, a natural disparity exists between the feature distributions of the two modalities, which limits more comprehensive cross-modal fusion in MMS models. In this paper, we propose to use Maximum Mean Discrepancy (MMD) to align the features of the two modalities, design a filter to further denoise the visual features, and perform cross-modal fusion on top of generative pre-trained language models, enabling better fusion and text generation. Moreover, we observe that the MMS dataset contains special tokens introduced during earlier data preprocessing, which can limit the performance of contemporary generative models. We therefore adopt a powerful Large Language Model (LLM) to re-preprocess the dataset and further enhance MMS models. Experimental results on the original MMS dataset demonstrate that our proposed method is effective and outperforms strong previous baselines. Results on the re-preprocessed MMS dataset also demonstrate the feasibility of incorporating an LLM into data preprocessing to enhance MMS models.
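(For reference, the abstract does not specify the exact estimator used for alignment; the following is the standard empirical form of the squared MMD that such alignment losses are typically built on, where the textual features $\{x_i\}_{i=1}^{n}$, visual features $\{y_j\}_{j=1}^{m}$, and kernel $k$ are assumptions introduced here for illustration only.)
\[
\widehat{\mathrm{MMD}}^{2}(X, Y) \;=\; \frac{1}{n^{2}} \sum_{i=1}^{n} \sum_{i'=1}^{n} k(x_i, x_{i'}) \;+\; \frac{1}{m^{2}} \sum_{j=1}^{m} \sum_{j'=1}^{m} k(y_j, y_{j'}) \;-\; \frac{2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} k(x_i, y_j)
\]
Minimizing this quantity as an auxiliary loss encourages the textual and visual feature distributions to match under the chosen kernel (e.g., a Gaussian kernel), which is the general mechanism by which MMD-based alignment reduces the cross-modal distribution gap.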