Keywords: Image Enhancement; Computer Science; Image (Mathematics); Image Quality; Artificial Intelligence; Computer Vision; Quality (Philosophy); Physics; Quantum Mechanics
Authors
Han Huang, Yuqi Huo, Zijia Zhao, Haoyu Lu, Shu Wu, Bingning Wang, Qiang Liu, Weipeng Chen, Li Wang
Source
Journal: Cornell University - arXiv
Date: 2024-10-21
Identifier
DOI:10.48550/arxiv.2410.16166
Abstract
Multimodal large language models (MLLMs) have made significant strides by integrating visual and textual modalities. A critical factor in training MLLMs is the quality of image-text pairs within multimodal pretraining datasets. However, $\textit{de facto}$ filter-based data quality enhancement paradigms often discard a substantial portion of high-quality image data due to inadequate semantic alignment between images and texts, leading to inefficiencies in data utilization and scalability. In this paper, we propose the Adaptive Image-Text Quality Enhancer (AITQE), a model that dynamically assesses and enhances the quality of image-text pairs. AITQE employs a text rewriting mechanism for low-quality pairs and incorporates a negative sample learning strategy that improves its evaluative capabilities by integrating deliberately selected low-quality samples during training. Unlike prior approaches that significantly alter text distributions, our method minimally adjusts text to preserve data volume while enhancing quality. Experimental results demonstrate that AITQE surpasses existing methods on various benchmarks, effectively leveraging raw data and scaling efficiently with increasing data volumes. We hope our work will inspire future research. The code and model are available at: https://github.com/hanhuang22/AITQE.
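The assess-then-rewrite loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the actual AITQE model: `quality_score` and `rewrite` below are hypothetical stand-in stubs for the learned quality evaluator and the caption-rewriting model. The key property it illustrates is that low-quality pairs are rewritten rather than discarded, so the full data volume is preserved.

```python
# Hypothetical sketch of adaptive quality enhancement: score each image-text
# pair, rewrite only the low-quality captions, and keep every pair.
from dataclasses import dataclass

@dataclass
class Pair:
    image_id: str
    caption: str
    enhanced: bool = False  # True if the caption was rewritten

def quality_score(caption: str) -> float:
    """Stand-in scorer: treats very short captions as poorly aligned.
    In AITQE this would be a learned image-text quality evaluator."""
    return min(len(caption.split()) / 8.0, 1.0)

def rewrite(caption: str, image_id: str) -> str:
    """Stand-in rewriter: in AITQE a model would generate an improved caption
    while minimally adjusting the text distribution."""
    return f"Photo {image_id}: {caption}"

def enhance_dataset(pairs, threshold=0.5):
    out = []
    for p in pairs:
        if quality_score(p.caption) < threshold:
            # Low-quality pair: rewrite the text instead of filtering it out.
            out.append(Pair(p.image_id, rewrite(p.caption, p.image_id), True))
        else:
            out.append(p)  # High-quality pair: keep as-is
    return out

raw = [
    Pair("img1", "a dog"),
    Pair("img2", "a brown dog runs across a grassy park"),
]
enhanced = enhance_dataset(raw)
print(len(enhanced), [p.enhanced for p in enhanced])  # prints: 2 [True, False]
```

Unlike a filter-based pipeline, which would simply drop the short caption, this loop returns as many pairs as it received; only the flagged caption is changed.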