Electroencephalography (EEG) is a vital noninvasive technique used in neuroscience research and clinical diagnosis. However, EEG data have a complex non-Euclidean structure and are often scarce, making it difficult to train effective graph neural network (GNN) models. We propose a "pre-train, prompt" framework based on graph neural networks for EEG analysis, called GNN-based EEG Prompt Learning (GEPL). The framework first pre-trains on a large-scale EEG dataset using unsupervised contrastive learning. It then transfers the generic EEG knowledge learned by the model to target EEG datasets through graph prompt learning, thereby enhancing performance with only a limited amount of EEG data from the target domain. We evaluated the framework on five EEG datasets, and the results showed that GEPL outperformed traditional fine-tuning methods in classification accuracy and area under the ROC curve (AUC). GEPL demonstrated improved generalization, robustness, and computational efficiency, and significantly reduced the risk of overfitting associated with limited EEG data. Moreover, the model provided interpretable results, highlighting the brain regions most relevant to each classification task. This research suggests that the "pre-train, prompt" paradigm is well suited to EEG analysis and offers potential applications in other domains where data are limited.
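
To make the two-stage pipeline concrete, the sketch below illustrates the general "pre-train, prompt" idea for graph-based EEG models: a GNN encoder is first pre-trained with a contrastive objective on unlabeled EEG graphs, then frozen, and a small learnable prompt plus a lightweight head are tuned on the target dataset. All names (EEGGraphEncoder, GraphPrompt, nt_xent_loss), dimensions, and the random placeholder data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a "pre-train, prompt" pipeline for GNN-based EEG analysis.
# Assumption-laden example: architecture, loss, and data shapes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EEGGraphEncoder(nn.Module):
    """Two-layer graph convolution over an EEG electrode graph (dense adjacency)."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, hid_dim)

    def forward(self, x, adj):
        # x: (n_channels, in_dim) node features; adj: normalized (n_channels, n_channels)
        h = F.relu(adj @ self.w1(x))
        h = adj @ self.w2(h)
        return h.mean(dim=0)            # graph-level embedding via mean pooling


def nt_xent_loss(z1, z2, tau=0.5):
    """Contrastive (NT-Xent style) loss between two augmented views of a batch of graphs."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau          # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))   # positive pairs sit on the diagonal
    return F.cross_entropy(logits, labels)


class GraphPrompt(nn.Module):
    """Lightweight prompt: a learnable offset added to node features; the encoder stays frozen."""

    def __init__(self, in_dim):
        super().__init__()
        self.prompt = nn.Parameter(torch.zeros(1, in_dim))

    def forward(self, x):
        return x + self.prompt


# --- Stage 1: unsupervised contrastive pre-training on a large unlabeled EEG corpus ---
encoder = EEGGraphEncoder(in_dim=128, hid_dim=64)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
# Pseudo-batch of 8 EEG graphs with 32 channels; real data and augmentations are omitted.
adj = torch.softmax(torch.rand(32, 32), dim=1)          # placeholder normalized adjacency
view1 = torch.randn(8, 32, 128)
view2 = view1 + 0.1 * torch.randn_like(view1)            # second, perturbed view
z1 = torch.stack([encoder(g, adj) for g in view1])
z2 = torch.stack([encoder(g, adj) for g in view2])
nt_xent_loss(z1, z2).backward()
opt.step()

# --- Stage 2: prompt tuning on a small labeled target dataset, with the encoder frozen ---
for p in encoder.parameters():
    p.requires_grad_(False)
prompt, head = GraphPrompt(in_dim=128), nn.Linear(64, 2)
opt2 = torch.optim.Adam(list(prompt.parameters()) + list(head.parameters()), lr=1e-3)
x, y = torch.randn(32, 128), torch.tensor(1)              # one target-domain EEG graph + label
logits = head(encoder(prompt(x), adj))
F.cross_entropy(logits.unsqueeze(0), y.unsqueeze(0)).backward()
opt2.step()
```

The key design point the sketch captures is that only the prompt and the classification head are updated on the target dataset, which keeps the number of trainable parameters small and thus limits overfitting when target-domain EEG data are scarce.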