One of the major distinguishing features of continual learning (CL) is that the training tasks change over time, so adapting the model to learn a sequence of different tasks is a central challenge. A promising solution is to use stored historical task data to help the model retain old knowledge while training on new tasks. However, most existing methods do not account for noisy labels in the training data, which aggravate the forgetting of old tasks. In this paper, we propose a replay-based method for continual learning with noisy labels. We first filter the data by checking the consistency between labels and their feature distribution in the feature space, and add the filtered samples to the replay buffer for model training. We train the model with supervised contrastive learning. To avoid losing the information in the remaining data, we assign pseudo labels based on the distribution of samples in the feature space. To verify the effectiveness of our algorithm, we conducted experiments on four datasets: three datasets with artificial label noise (MNIST, CIFAR-10, and CIFAR-100) and one real-world noisy dataset (WebVision); our method achieves the best results on all of them. The experimental results show that the way we filter data and the way we update the buffer data both have a significant impact on model performance.
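The consistency-based filtering and pseudo-labeling described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual algorithm: it assumes class prototypes are the per-class feature means, measures consistency by nearest-prototype agreement under cosine similarity, and pseudo-labels inconsistent samples with their nearest prototype's class. The function name and all details are hypothetical.

```python
import numpy as np

def consistency_filter(features, labels, num_classes):
    """Hypothetical sketch: keep samples whose given label agrees with the
    nearest class prototype in feature space; pseudo-label the rest."""
    # Class prototypes: mean feature vector of each labeled class.
    protos = np.stack([features[labels == c].mean(axis=0)
                       for c in range(num_classes)])
    # Cosine similarity of every sample to every prototype.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    sims = f @ p.T
    nearest = sims.argmax(axis=1)
    # "Clean" mask: the given label matches the nearest prototype;
    # clean samples would go into the replay buffer.
    clean = nearest == labels
    # Inconsistent samples are re-labeled from the feature distribution
    # instead of being discarded, so their information is not lost.
    pseudo = np.where(clean, labels, nearest)
    return clean, pseudo
```

On a toy example with two well-separated clusters and one flipped label, the mislabeled sample is flagged and re-assigned to the cluster its features actually belong to.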