Generalizability theory
Computer science
Discriminative
Robustness (evolution)
Generalization
Artificial intelligence
Machine learning
Transfer of learning
Set (abstract data type)
Natural language processing
Programming language
Gene
Mathematics
Chemistry
Statistics
Mathematical analysis
Biochemistry
Authors
Xiangyu Liu, Yanlei Shang, Yong Chen
Identifier
DOI: 10.1145/3652583.3658106
Abstract
Prompt learning has proven to be an effective technique for adapting large visual-language models (LVLMs) to downstream tasks via few-shot learning. Early methods often rely on a single prompt, which is insufficient for comprehensively representing a class. Subsequent efforts have explored multiple prompts to further enhance the adaptability and performance of LVLMs. However, these methods primarily focus on learning a set of more discriminative prompts, overlooking their generalizability. To learn prompts that are better balanced between generalization and discrimination, we propose a novel multi-prompt learning approach, Masked Multi-Prompt Learning with Knowledge Mixing (dubbed TriMPL), which contains two pivotal mechanisms: (1) knowledge mixing, which enhances the generalization of each individual prompt, and (2) prompt masking, which boosts the prompt set's overall robustness. Knowledge mixing progressively injects the general knowledge of handcrafted prompts into each learnable prompt at different Transformer encoding stages. Prompt masking rests on the insight that an optimal set of prompts should exhibit independence, allowing accurate predictions from just a subset of prompts; during training, TriMPL randomly masks some prompts to enhance the overall robustness of the learned prompts for image classification. We evaluate the effectiveness of TriMPL under three settings: (1) base-to-new generalization, (2) cross-dataset transfer, and (3) domain generalization. Extensive experiments demonstrate that TriMPL learns a set of effective prompts, achieving superior performance over a number of state-of-the-art competitors.
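To make the prompt-masking mechanism concrete, below is a minimal PyTorch sketch, not the authors' implementation. It assumes a frozen image encoder and simplifies each prompt to a single learnable context vector rather than the token sequences the paper feeds through the text encoder; the class name MaskedMultiPrompt, the mask_ratio hyperparameter, and the averaging of per-prompt logits are illustrative assumptions, and the knowledge-mixing injection at intermediate Transformer stages is not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedMultiPrompt(nn.Module):
    """Illustrative sketch (not the authors' code): N learnable prompts,
    a random subset of which is masked out at each training step so the
    surviving prompts must still yield accurate predictions."""

    def __init__(self, num_prompts=4, embed_dim=512, num_classes=10, mask_ratio=0.25):
        super().__init__()
        # One learnable context vector per prompt; the paper instead learns
        # token sequences passed through a frozen CLIP text encoder.
        self.prompts = nn.Parameter(0.02 * torch.randn(num_prompts, embed_dim))
        self.class_embed = nn.Parameter(0.02 * torch.randn(num_classes, embed_dim))
        self.mask_ratio = mask_ratio  # hypothetical hyperparameter

    def forward(self, image_feats):
        # image_feats: (B, D) embeddings from a frozen image encoder.
        prompts = self.prompts
        if self.training and self.mask_ratio > 0:
            # Prompt masking: drop a random subset of prompts this step.
            keep = torch.rand(prompts.size(0)) >= self.mask_ratio
            if not keep.any():  # always retain at least one prompt
                keep[torch.randint(prompts.size(0), (1,))] = True
            prompts = prompts[keep]
        # Each surviving prompt conditions every class embedding; averaging
        # per-prompt logits means any subset still produces a prediction.
        text = F.normalize(self.class_embed[None, :, :] + prompts[:, None, :], dim=-1)
        img = F.normalize(image_feats, dim=-1)
        logits = torch.einsum('bd,pcd->bpc', img, text)  # (B, kept prompts, C)
        return logits.mean(dim=1)  # (B, C)

# Usage sketch: masking is active in train mode, disabled by model.eval().
model = MaskedMultiPrompt()
model.train()
out = model(torch.randn(8, 512))  # -> (8, 10) class logits

Averaging the per-prompt logits is one simple way to realize the stated goal that any subset of prompts remains predictive; the paper's actual aggregation may differ.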