Vision Transformer (ViT) models have recently emerged as powerful and versatile tools for a wide range of visual tasks. In this article, we investigate ViT in the more challenging few-shot setting. Recent work has achieved promising results in few-shot image classification by utilizing pre-trained vision transformer models. However, these methods rely on full fine-tuning for downstream tasks, which leads to severe overfitting and storage overhead, especially in the remote sensing domain. To tackle these issues, we turn to the recently proposed Parameter-Efficient Tuning (PETuning) methods, which update only newly added parameters while keeping the pre-trained backbone frozen. Inspired by these methods, we propose the Meta Visual Prompt Tuning (MVP) method. Specifically, we integrate the prompt-tuning-based PETuning method into a meta-learning framework and tailor it to remote sensing datasets, yielding an efficient framework for Few-Shot Remote Sensing Scene Classification (FS-RSSC). Moreover, we introduce a novel data augmentation scheme that exploits patch embedding recombination to enhance data diversity and quantity; the scheme is applicable to any network that uses the ViT architecture as its backbone. Experimental results on the FS-RSSC benchmark demonstrate that the proposed MVP outperforms existing methods in various settings, including various-way-various-shot, various-way-one-shot, and cross-domain adaptation.
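To make the two ideas above concrete, the following is a minimal sketch, not the authors' released code: a ViT-style encoder whose pre-trained weights are frozen while learnable prompt tokens and a classification head are tuned, plus a patch embedding recombination augmentation. The encoder is a stand-in built from torch.nn.TransformerEncoder rather than a pre-trained ViT, and the names PromptedViT and recombine_patch_embeddings, as well as the random per-patch mixing rule between two same-class images, are illustrative assumptions since the abstract does not specify these details.

```python
# Sketch only: frozen ViT-style backbone + learnable prompts (prompt tuning),
# and a patch-embedding recombination augmentation. Names and the mixing rule
# are assumptions for illustration, not the paper's exact implementation.
import torch
import torch.nn as nn


class PromptedViT(nn.Module):
    """ViT-style encoder with a frozen backbone and learnable prompt tokens."""

    def __init__(self, img_size=224, patch_size=16, dim=384,
                 depth=6, heads=6, num_prompts=10, num_classes=5):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

        # Freeze everything created so far (the "pre-trained" backbone);
        # only the prompts and the head added below remain trainable.
        for p in self.parameters():
            p.requires_grad = False

        self.prompts = nn.Parameter(torch.randn(1, num_prompts, dim) * 0.02)
        self.head = nn.Linear(dim, num_classes)

    def embed_patches(self, x):
        # (B, 3, H, W) -> (B, num_patches, dim)
        return self.patch_embed(x).flatten(2).transpose(1, 2)

    def forward_from_embeddings(self, patch_tokens):
        b = patch_tokens.size(0)
        cls = self.cls_token.expand(b, -1, -1)
        tokens = torch.cat([cls, patch_tokens], dim=1) + self.pos_embed
        # Prepend the learnable prompt tokens to the frozen token sequence.
        tokens = torch.cat([self.prompts.expand(b, -1, -1), tokens], dim=1)
        feats = self.encoder(tokens)
        # Classify from the CLS token (located right after the prompts).
        return self.head(feats[:, self.prompts.size(1)])

    def forward(self, x):
        return self.forward_from_embeddings(self.embed_patches(x))


def recombine_patch_embeddings(emb_a, emb_b, keep_ratio=0.5):
    """Mix patch embeddings of two same-class images into a synthetic sample."""
    num_patches = emb_a.size(1)
    mask = torch.rand(num_patches, device=emb_a.device) < keep_ratio
    return torch.where(mask[None, :, None], emb_a, emb_b)


if __name__ == "__main__":
    model = PromptedViT()
    trainable = [n for n, p in model.named_parameters() if p.requires_grad]
    print("trainable:", trainable)  # only the prompts and the head

    x1, x2 = torch.randn(2, 4, 3, 224, 224)  # two same-class mini-batches
    mixed = recombine_patch_embeddings(model.embed_patches(x1),
                                       model.embed_patches(x2))
    logits = model.forward_from_embeddings(mixed)
    print(logits.shape)  # (4, num_classes)
```

Because only the prompt tokens and the head carry gradients, the storage cost per downstream task is a small fraction of the full backbone, which is the parameter-efficiency argument the abstract makes.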