Most real-world data follow a long-tail distribution, in which head classes contain many samples while tail classes contain few. For long-tail visual classification, two-stage training generally outperforms end-to-end training; in practical applications, however, one-stage end-to-end models prevail because they are easier to deploy. Recently, supervised contrastive learning has been applied to long-tail distributions with notable success. Both methodologies aim to mitigate the repulsive influence of the dominant classes while striving for an equitable distribution of all classes over the hypersphere. Building on the former line of work, we find that assigning each class a dynamically adjusted weighting factor, with the classification-layer weights serving as prior knowledge, increases the number of negative sample pairs for the tail classes, thereby strengthening the model's attention to them and improving contrastive accuracy. To further improve tail-class accuracy and the generalization ability of the model, this paper proposes a supervised contrastive learning network based on multi-view compensation feature fusion. Multi-view inputs supply the classification network with more comprehensive representation information, enriching the semantic understanding of samples in the contrastive learning network; combined with a dynamically weighted balanced loss function, this improves tail-class accuracy. With a small batch size and an imbalance factor of 0.01, the proposed network achieves average Top-1 accuracies of 83.293% on CIFAR-10-LT and 55.092% on CIFAR-100-LT, a significant result.
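As an illustration only, the idea of reweighting a supervised contrastive loss using classification-layer weight norms as prior knowledge can be sketched as follows. This is a minimal NumPy sketch under assumed conventions (the exact weighting scheme, loss form, and hyperparameters are defined by the paper's method, not here): tail classes, whose classifier weight vectors tend to have smaller norms, receive larger weights, and each anchor's contrastive term is scaled by its class weight.

```python
import numpy as np

def class_weights_from_classifier(W, eps=1e-8):
    """Hypothetical prior: derive per-class weights from the norms of the
    classification-layer weight matrix W of shape (num_classes, dim).
    Classes with small weight norms (typically tail classes) get larger
    weights; the result is normalized to have mean 1."""
    norms = np.linalg.norm(W, axis=1) + eps
    w = 1.0 / norms
    return w / w.sum() * len(w)

def weighted_supcon_loss(feats, labels, class_w, temp=0.1):
    """Supervised contrastive loss with per-class weighting (sketch).
    feats: (N, d) L2-normalized embeddings; labels: (N,) integer class ids;
    class_w: (num_classes,) weights from class_weights_from_classifier."""
    sim = feats @ feats.T / temp                      # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    np.fill_diagonal(log_prob, 0.0)                   # avoid 0 * (-inf) below
    pos = (labels[:, None] == labels[None, :]).astype(float)
    np.fill_diagonal(pos, 0.0)                        # positives exclude self
    # mean log-probability over positives for each anchor
    per_anchor = (pos * log_prob).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    # scale each anchor's term by its class weight before averaging
    return -(class_w[labels] * per_anchor).mean()
```

In this sketch a tail class with a small classifier weight norm receives a weight above 1, so its anchors contribute more strongly to the loss, mimicking the increased attention the abstract describes.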