Prompt learning has proven to be an effective technique for adapting large vision-language models (LVLMs) to downstream tasks via few-shot learning. Early methods often rely on a single prompt, which is insufficient to comprehensively represent a class. Subsequent efforts have explored multiple prompts to further enhance the adaptability and performance of LVLMs. However, these methods primarily focus on learning a set of more discriminative prompts, overlooking their generalizability. To learn prompts that better balance generalization and discrimination, we propose a novel multi-prompt learning approach, Masked Multi-Prompt Learning with Knowledge Mixing (dubbed TriMPL), which comprises two pivotal mechanisms: (1) knowledge mixing, which enhances the generalization of each individual prompt, and (2) prompt masking, which boosts the overall robustness of the prompt set. Knowledge mixing progressively injects the general knowledge of handcrafted prompts into each learnable prompt at different Transformer encoding stages. Prompt masking builds on the insight that an optimal set of prompts should exhibit independence, allowing accurate predictions from only a subset of prompts; accordingly, during training, TriMPL randomly masks some prompts to enhance the overall robustness of the learned prompt set for image classification. We evaluate the effectiveness of TriMPL under three settings: (1) base-to-new generalization, (2) cross-dataset transfer, and (3) domain generalization. Extensive experiments demonstrate that TriMPL learns an effective set of prompts, achieving superior performance over state-of-the-art competitors.
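To make the prompt-masking idea concrete, the following is a minimal sketch in PyTorch of how class logits could be computed from a randomly masked subset of prompts during training. It assumes a CLIP-style setup with L2-normalized image and text embeddings; all names and shapes here (e.g. `mask_ratio`, averaging over surviving prompts) are illustrative assumptions, not the paper's exact implementation.

```python
import torch


def masked_prompt_logits(image_feats, text_feats, mask_ratio=0.5, training=True):
    """Compute class logits from a set of prompts, randomly masking some of them.

    image_feats: (B, D)    L2-normalized image embeddings
    text_feats:  (C, P, D) L2-normalized text embeddings, P prompts per class
    """
    C, P, _ = text_feats.shape

    # Cosine similarity between every image and every (class, prompt) pair.
    sims = torch.einsum("bd,cpd->bcp", image_feats, text_feats)  # (B, C, P)

    if training:
        # Randomly keep a subset of prompts; ensure at least one survives.
        keep = torch.rand(P, device=sims.device) >= mask_ratio  # (P,) bool
        if not keep.any():
            keep[torch.randint(0, P, (1,), device=keep.device)] = True
        sims = sims[:, :, keep]

    # Average similarity over the surviving prompts gives the class logit.
    return sims.mean(dim=-1)  # (B, C)
```

Averaging over a different random subset of prompts at each training step encourages every prompt to remain individually predictive, which reflects the independence property the abstract highlights.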