Domain generalization refers to the challenge of training a model on multiple source domains such that it generalizes well to unseen target domains. Contrastive learning is a promising solution that aims to learn domain-invariant representations by exploiting rich semantic relations among sample pairs drawn from different domains. A simple strategy is to pull positive sample pairs from different domains closer while pushing negative pairs further apart. However, in this paper we find that directly applying such contrastive methods is not effective for domain generalization. To overcome this limitation, we propose a novel contrastive learning approach that promotes class-discriminative and class-balanced features across the source domains: sample representations of the same category are encouraged to cluster together, while those of different categories are spread apart, thereby enhancing the model's generalization capability. Furthermore, most existing contrastive learning methods rely on batch normalization, which may prevent the model from learning domain-invariant features. Inspired by recent research on universal representations for neural networks, we emulate this mechanism by using batch normalization layers to distinguish visual domains and formulating a way to combine them for domain generalization tasks. Our experiments demonstrate significant improvements in classification accuracy over state-of-the-art techniques on popular domain generalization benchmarks, including Digits-DG, PACS, Office-Home, and DomainNet.
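As a rough illustration of the class-discriminative objective described above, the sketch below implements a standard supervised contrastive loss in PyTorch over a mini-batch pooled from all source domains. The function name, temperature value, and batching scheme are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Class-wise contrastive loss over a batch pooled from all source domains.

    features: (N, D) embeddings from the encoder; labels: (N,) class ids.
    Same-class pairs (regardless of domain) are treated as positives,
    different-class pairs as negatives.
    """
    # L2-normalize so dot products are cosine similarities.
    z = F.normalize(features, dim=1)
    sim = torch.matmul(z, z.T) / temperature              # (N, N) similarity logits

    # Mask of positive pairs (same class), excluding self-pairs.
    labels = labels.view(-1, 1)
    pos_mask = torch.eq(labels, labels.T).float()
    self_mask = torch.eye(z.size(0), device=z.device)
    pos_mask = pos_mask - self_mask

    # Log-softmax over all other samples in the batch (self-similarity masked out).
    logits = sim - 1e9 * self_mask
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

    # Average log-likelihood of positives for each anchor that has positives.
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(pos_mask * log_prob).sum(dim=1) / pos_count
    return loss.mean()
```

Because positives are defined purely by class label, same-class samples from different source domains are pulled together while different classes are pushed apart, which is the behavior the class-discriminative objective relies on.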