Unsupervised person re-identification (Re-ID) aims to learn discriminative features without human-annotated labels. Recently, contrastive learning has opened a new avenue for unsupervised person Re-ID, but existing methods primarily constrain feature similarity among easy sample pairs. The feature similarity among hard sample pairs is neglected, which yields suboptimal performance in unsupervised person Re-ID. In this paper, we propose a novel Hybrid Contrastive Model (HCM) that performs identity-level and image-level contrastive learning for unsupervised person Re-ID and thereby adequately exploits feature similarities among hard sample pairs. Specifically, for identity-level contrastive learning, an identity-based memory is constructed to store pedestrian features, and a dynamic contrast loss is defined to identify identity information with a dynamic factor that distinguishes hard from easy samples. For image-level contrastive learning, an image-based memory is established to store the feature of each image, and a sample constraint loss is designed to explore the similarity relationships between hard positive and hard negative sample pairs. Furthermore, we optimize the two contrastive learning processes in a unified framework so that their complementary advantages jointly constrain the feature distribution and mine latent information. Extensive experiments demonstrate that the proposed HCM clearly outperforms existing methods.
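To make the two-memory design concrete, the following is a minimal sketch of memory-based hybrid contrastive losses, assuming a standard InfoNCE-style formulation with an identity-based memory (one centroid per pseudo-identity cluster) and an image-based memory (one slot per training image). The class name, temperature, momentum update, and loss forms are illustrative assumptions only; they are not the paper's exact definitions of the dynamic contrast loss or the sample constraint loss.

```python
# Illustrative sketch (assumed API and hyperparameters, not the paper's exact losses).
import torch
import torch.nn.functional as F


class HybridMemory:
    def __init__(self, feat_dim, num_identities, num_images, temp=0.05, momentum=0.2):
        # Identity-based memory: one L2-normalized centroid per pseudo identity.
        self.identity_memory = F.normalize(torch.randn(num_identities, feat_dim), dim=1)
        # Image-based memory: one L2-normalized feature per training image.
        self.image_memory = F.normalize(torch.randn(num_images, feat_dim), dim=1)
        self.temp = temp
        self.momentum = momentum

    def identity_level_loss(self, feats, pseudo_labels):
        # Contrast each batch feature against all identity centroids (InfoNCE).
        logits = feats @ self.identity_memory.t() / self.temp
        return F.cross_entropy(logits, pseudo_labels)

    def image_level_loss(self, feats, pseudo_labels, memory_labels):
        # Contrast each batch feature against all stored image features; every
        # stored image sharing the query's pseudo label is treated as a positive.
        logits = feats @ self.image_memory.t() / self.temp
        pos_mask = (memory_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)).float()
        log_prob = F.log_softmax(logits, dim=1)
        return -(log_prob * pos_mask).sum(1).div(pos_mask.sum(1).clamp(min=1)).mean()

    @torch.no_grad()
    def update(self, feats, image_indices, pseudo_labels):
        # Momentum update of both memories with the current batch features.
        self.image_memory[image_indices] = F.normalize(
            self.momentum * self.image_memory[image_indices]
            + (1 - self.momentum) * feats, dim=1)
        for lbl in pseudo_labels.unique():
            batch_mean = feats[pseudo_labels == lbl].mean(0)
            self.identity_memory[lbl] = F.normalize(
                self.momentum * self.identity_memory[lbl]
                + (1 - self.momentum) * batch_mean, dim=0)
```

In a unified training loop, the two losses would simply be summed (possibly with weighting) and the memories refreshed after each backward pass; the paper's actual hard/easy weighting via the dynamic factor is omitted here.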