Like fingerprints, facial features, and other biometric traits, the human voice carries the physiological characteristics of its owner: it is unique, stable personal information that cannot easily be stolen or lost. Although a speaker's voice varies with context, the underlying voiceprint features remain largely invariant; because they are subtle and difficult to forge, they enable stronger and safer authentication. Based on these characteristics of the human voiceprint, this paper designs a voiceprint recognition system built on neural networks. To help the network make better use of the raw data, time- and frequency-domain masking is applied for data augmentation. The network adopts an encoder-decoder transformer architecture for end-to-end processing, and a triplet loss function is used to evaluate and optimize the network parameters and improve the model's prediction accuracy. Modeling experiments were conducted on the LibriSpeech and CN-Celeb datasets. The resulting system performs end-to-end voiceprint recognition based on deep learning and was tested to meet the design requirements.
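As a concrete illustration of the time- and frequency-domain masking mentioned above, the sketch below applies SpecAugment-style masks to a log-mel spectrogram in PyTorch. The mask widths, the number of mel bins, and the frame count are illustrative assumptions, not settings taken from the paper.

```python
import torch

def mask_spectrogram(spec: torch.Tensor,
                     max_freq_mask: int = 8,
                     max_time_mask: int = 20) -> torch.Tensor:
    """SpecAugment-style augmentation: zero out one random band of
    frequency bins and one random span of time frames.

    spec: (n_mels, n_frames) log-mel spectrogram.
    Mask widths here are illustrative, not the paper's settings.
    """
    spec = spec.clone()
    n_mels, n_frames = spec.shape

    # Frequency masking: zero a random band [f0, f0 + f) of mel bins.
    f = torch.randint(0, max_freq_mask + 1, (1,)).item()
    f0 = torch.randint(0, max(1, n_mels - f), (1,)).item()
    spec[f0:f0 + f, :] = 0.0

    # Time masking: zero a random span [t0, t0 + t) of frames.
    t = torch.randint(0, max_time_mask + 1, (1,)).item()
    t0 = torch.randint(0, max(1, n_frames - t), (1,)).item()
    spec[:, t0:t0 + t] = 0.0
    return spec

if __name__ == "__main__":
    dummy = torch.randn(80, 300)      # 80 mel bins, 300 frames
    augmented = mask_spectrogram(dummy)
    print(augmented.shape)            # torch.Size([80, 300])
```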
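The next sketch shows how a transformer-based speaker encoder might be trained with a triplet loss, in the spirit of the architecture and loss described above. For simplicity it uses only the encoder half with mean pooling rather than the full encoder-decoder pipeline, and the layer sizes, embedding dimension, and margin are assumptions for illustration rather than the paper's hyperparameters.

```python
import torch
import torch.nn as nn

class SpeakerEncoder(nn.Module):
    """Toy transformer encoder mapping a spectrogram sequence to a
    fixed-size speaker embedding. All dimensions are illustrative."""

    def __init__(self, n_mels: int = 80, d_model: int = 192,
                 n_heads: int = 4, n_layers: int = 3, emb_dim: int = 128):
        super().__init__()
        self.proj = nn.Linear(n_mels, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, emb_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, n_mels)
        h = self.encoder(self.proj(x))    # (batch, frames, d_model)
        h = h.mean(dim=1)                 # temporal average pooling
        return nn.functional.normalize(self.head(h), dim=-1)

# Triplet loss: pull anchor and positive (same speaker) together,
# push anchor and negative (different speaker) apart by a margin.
encoder = SpeakerEncoder()
criterion = nn.TripletMarginLoss(margin=0.3)

anchor   = encoder(torch.randn(8, 300, 80))   # utterances of speaker A
positive = encoder(torch.randn(8, 300, 80))   # other utterances of A
negative = encoder(torch.randn(8, 300, 80))   # utterances of speaker B
loss = criterion(anchor, positive, negative)
loss.backward()
print(loss.item())
```

In this setup the triplet loss operates directly on the normalized embeddings, so recognition at test time reduces to comparing embedding distances between enrollment and test utterances.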