Named Entity Recognition (NER) is a fundamental task in natural language processing, and syntactic information plays a significant role in recognizing both the boundaries and the types of entities. Unlike English, Chinese lacks explicit word delimiters, which makes entity boundaries difficult to determine; for the same reason, syntactic parses of Chinese text may themselves contain errors introduced by incorrect word segmentation. In this paper, we propose a dual-grained syntax-aware Transformer network that mitigates the noise in single-grained syntactic parsing results by incorporating syntactic information at two granularities. Specifically, we first introduce syntax-aware Transformers to model dual-grained syntax-aware features and a contextual Transformer to model contextual features. We then design a triple feature aggregation module that dynamically fuses these three feature streams. Experiments on three public datasets validate the effectiveness of our approach.
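As an illustration only: a "triple feature aggregation" of the kind described above is commonly realized as a gated weighted sum, where a per-token softmax gate decides how much each of the three streams (contextual, and two syntax-aware granularities) contributes. The sketch below is a hypothetical NumPy implementation under that assumption, not the authors' actual module; the function name `aggregate` and the gate parameterization are invented for this example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate(h_ctx, h_syn1, h_syn2, W):
    """Gated fusion of three feature streams (a hypothetical sketch).

    h_ctx, h_syn1, h_syn2: (seq_len, d) feature matrices for the
        contextual stream and the two syntax-aware streams.
    W: (3*d, 3) learned gate projection.
    Returns a (seq_len, d) fused representation.
    """
    concat = np.concatenate([h_ctx, h_syn1, h_syn2], axis=-1)  # (seq_len, 3d)
    gates = softmax(concat @ W)                                # (seq_len, 3), rows sum to 1
    stacked = np.stack([h_ctx, h_syn1, h_syn2], axis=-1)       # (seq_len, d, 3)
    # Convex combination of the three streams, one gate vector per token.
    return (stacked * gates[:, None, :]).sum(axis=-1)

rng = np.random.default_rng(0)
d = 8
h_ctx = rng.normal(size=(5, d))
h_syn1 = rng.normal(size=(5, d))
h_syn2 = rng.normal(size=(5, d))
W = rng.normal(size=(3 * d, 3))
fused = aggregate(h_ctx, h_syn1, h_syn2, W)  # shape (5, 8)
```

Because the gates form a convex combination, a token whose syntactic features are unreliable (e.g. due to a segmentation error) can be dominated by the contextual stream, which is the intuition behind dynamic fusion.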