Biomedical entity normalization (BEN) aims to link entity mentions in biomedical text to referent entities in a knowledge base. Recently, the paradigm of large-scale language model pre-training and fine-tuning has achieved superior performance on the BEN task. However, pre-trained language models such as SAPBERT [21] typically contain hundreds of millions of parameters, and fine-tuning all of them is computationally expensive. Recent research on prompt techniques aims to reduce the number of trainable parameters during model training. We therefore propose Prompt-BEN, a framework that enhances BEN with continuous prompts and requires fine-tuning only a small number of prompt parameters. Our method prepends continuous prefix-prompt embeddings to the input to capture the semantic similarity between mentions and terms. We also design a contrastive loss with a synonym-marginalization strategy for the BEN task. Experimental results on three benchmark datasets demonstrate that our method achieves linking accuracy competitive with, or even exceeding, state-of-the-art fine-tuning-based models while tuning about 600 times fewer parameters.
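To make the parameter-efficient setup concrete, below is a minimal PyTorch sketch of prefix-prompt tuning with a synonym-marginalized contrastive loss. It assumes a HuggingFace BERT-family backbone; the class name `PrefixPromptEncoder`, the prefix length, and the exact loss form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torch import nn
from transformers import AutoModel

class PrefixPromptEncoder(nn.Module):
    """Frozen BERT-family encoder with trainable prefix embeddings prepended
    to the token embeddings; only the prefix is updated during training.
    (Illustrative sketch, not the authors' exact architecture.)"""

    def __init__(self,
                 model_name="cambridgeltl/SapBERT-from-PubMedBERT-fulltext",
                 prefix_len=10):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        for p in self.encoder.parameters():          # freeze the backbone
            p.requires_grad = False
        hidden = self.encoder.config.hidden_size
        self.prefix = nn.Parameter(torch.randn(prefix_len, hidden) * 0.02)

    def forward(self, input_ids, attention_mask):
        # Raw word embeddings; the encoder adds position embeddings itself
        # when called with inputs_embeds.
        tok_emb = self.encoder.embeddings.word_embeddings(input_ids)
        batch = input_ids.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        embeds = torch.cat([prefix, tok_emb], dim=1)
        prefix_mask = torch.ones(batch, self.prefix.size(0),
                                 dtype=attention_mask.dtype,
                                 device=attention_mask.device)
        mask = torch.cat([prefix_mask, attention_mask], dim=1)
        out = self.encoder(inputs_embeds=embeds, attention_mask=mask)
        return out.last_hidden_state[:, 0]           # [CLS]-position vector

def synonym_marginal_nll(mention_vecs, cand_vecs, synonym_mask):
    """Contrastive loss that marginalizes the softmax probability over all
    candidate terms that are synonyms of the gold entity.
    mention_vecs: (B, H); cand_vecs: (B, K, H); synonym_mask: (B, K) in {0,1}."""
    scores = torch.einsum("bh,bkh->bk", mention_vecs, cand_vecs)
    probs = F.softmax(scores, dim=-1)
    pos = (probs * synonym_mask).sum(dim=-1).clamp_min(1e-9)
    return -pos.log().mean()

# Only the prompt parameters receive gradients; the backbone stays frozen.
# model = PrefixPromptEncoder()
# optimizer = torch.optim.AdamW([model.prefix], lr=1e-3)
```

In this sketch the only tuned tensor is the prefix matrix (prefix_len × hidden), which is orders of magnitude smaller than the frozen backbone, in keeping with the abstract's claim of roughly 600 times fewer tuned parameters.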