Point clouds are dense three-dimensional (3D) data, and annotating them is time-consuming and labor-intensive. Existing semantic segmentation work adopts few-shot learning to reduce the dependence on labeled samples while improving the generalization of the model to new categories. Since point clouds are 3D structures with rich geometric features, even objects of the same category exhibit feature differences that cannot be ignored. Consequently, the few samples used to train the model (the support set) do not cover all the features of a category, and there is a distribution difference between the support samples and the samples used to evaluate the model (the query set). In this paper, we propose an efficient point cloud few-shot segmentation method based on prototypes with bias rectification. A prototype is a vector representation of a category in the metric space. To bring the prototype representation of the support set closer to the query set features, we define a feature bias term and reduce the distribution distance between the two sets by fusing the support set features with the bias term. On this basis, we design a feature cross-reference module: by mining the co-occurring features of the support and query sets, it generates a more representative prototype that captures the overall features of the point cloud. Extensive experiments on two challenging datasets demonstrate that our method outperforms the state-of-the-art method by an average of 3.31\% on several N-way K-shot tasks, while achieving approximately 200 times faster inference. Our code is available at https://github.com/964918993/2CBR .
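The bias-rectification idea can be sketched as follows. This is a minimal illustration with assumed names and a simple mean-shift bias term plus nearest-prototype matching; the paper's actual fusion and cross-reference module may differ:

```python
import numpy as np

def rectified_prototype(support_feats, query_feats, alpha=0.5):
    """Hypothetical sketch of prototype bias rectification.

    support_feats: (Ns, D) features of labeled support points for one class.
    query_feats:   (Nq, D) features of unlabeled query points.
    alpha:         assumed fusion weight for the bias term.
    """
    proto = support_feats.mean(axis=0)                 # class prototype from the support set
    bias = query_feats.mean(axis=0) - proto            # estimated support-to-query distribution shift
    return proto + alpha * bias                        # shift the prototype toward query statistics

def segment(query_feats, prototypes):
    """Label each query point by its nearest prototype (cosine similarity)."""
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return (q @ p.T).argmax(axis=1)                    # per-point class index
```

A usage pattern would extract per-point features for both sets with a shared backbone, rectify one prototype per class, then call `segment` on the query features.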