Keywords
Computer science; Artificial intelligence; Machine learning; Computer vision; Automatic summarization; Modality (human-computer interaction); Pattern recognition; Filter (signal processing); Preprocessor
Authors
Binghao Tang, Boda Lin, Zheng Chang, Si Li
Identifier
DOI:10.1016/j.neucom.2024.128270
Abstract
Previous studies on MultiModal Summarization (MMS) mainly focus on the effective selection and filtering of visual features to assist cross-modal fusion and text-based generation. However, there is a natural disparity between the distributions of features from different modalities, which limits more comprehensive cross-modal fusion in MMS models. In this paper, we propose to utilize Maximum Mean Discrepancy (MMD) to align the features from the two modalities, design a filter to further denoise the visual features, and conduct cross-modal fusion based on generative pre-trained language models for better cross-modal fusion and text generation. Moreover, we notice the presence of special tokens in the MMS dataset, introduced during prior data preprocessing, which can limit the performance of contemporary generative models. We therefore adopt a powerful Large Language Model (LLM) to preprocess the dataset and enhance MMS models. Experimental results on the original MMS dataset demonstrate that our proposed method is effective and outperforms previous strong baselines. Experimental results on the preprocessed MMS dataset also demonstrate the feasibility of incorporating an LLM into data preprocessing to enhance MMS models.