Recently, character-word lattice structures have achieved promising results for Chinese named entity recognition (NER), reducing word segmentation errors and enriching character sequences with word boundary information. However, constructing the lattice structure is complex and time-consuming, so lattice-based models usually suffer from low inference speed. Moreover, the quality of the lexicon affects the accuracy of the NER model: noisy words can confuse the NER system, while limited lexicon coverage can cause lattice-based models to degenerate into purely character-based models. In this article, we propose a hierarchical label-enhanced contrastive learning (HLCL) method for Chinese NER. Instead of relying on the lattice structure, HLCL offers an alternative way to robustly integrate entity boundary and type information with the help of both label semantics and contrastive learning. HLCL is empowered by two techniques: 1) sentence-level contrastive learning (SCL), which models global mutual information between two different modalities (i.e., labels and sentences), and 2) token-level contrastive learning (TCL), which closes the gap between representations of different characters (i.e., label-enhanced characters and original characters), capturing local mutual information. With the well-designed contrastive learning scheme and a concise model at inference time, HLCL fully leverages transferable label semantics and achieves high inference speed. Experiments on four Chinese NER datasets show that HLCL obtains excellent efficiency as well as strong performance compared with existing lattice-based approaches.
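The abstract does not give the exact SCL/TCL objectives; as a rough illustration, both levels of contrastive learning typically build on an InfoNCE-style loss that pulls paired views (e.g., a sentence and its label description, or a label-enhanced character and its original character) together while pushing apart mismatched pairs. The sketch below is a generic assumption, not the paper's actual formulation, and all names are illustrative:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Generic InfoNCE contrastive loss between two aligned sets of
    embeddings. Row i of `positives` is the positive pair for row i of
    `anchors`; all other rows in the batch serve as negatives."""
    # L2-normalize so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs lie on the diagonal: maximize log p(i matches i)
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
loss_aligned = info_nce_loss(x, x + 0.01 * rng.normal(size=(8, 16)))
loss_random = info_nce_loss(x, rng.normal(size=(8, 16)))
# Well-aligned pairs should yield a lower loss than random pairings
print(loss_aligned, loss_random)
```

In HLCL's setting, SCL would apply such an objective at the sentence level (sentence vs. label modality) and TCL at the token level (label-enhanced vs. original character representations), though the precise loss terms are defined in the paper itself.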