In this paper, we present an improved acoustic model, CNN-DFSMN, which uses a CNN to learn local features in both the frequency and time domains and introduces skip connections between the memory blocks of adjacent layers, thereby alleviating the vanishing-gradient problem that arises when building very deep structures. In recent years, acoustic models based on Connectionist Temporal Classification (CTC) have achieved good performance in speech recognition. LSTM-type networks are generally used as the acoustic model in CTC systems; however, LSTMs are computationally expensive and are sometimes difficult to train under the CTC criterion. Inspired by the work on DFSMN, we replace the LSTM with a DFSMN in CTC-based acoustic modeling and combine a convolutional neural network (CNN) with this architecture to train a CNN-DFSMN-CTC acoustic model. This acoustic model is paired with a 3-gram language model, and the lexicon and acoustic feature vectors are combined to decode the recognized text, further improving the performance of Tibetan speech recognition. Experimental results show that, on the same test set, the WER of the DFSMN-CTC-based method is 2.34% and 0.94% lower than that of the CNN-CTC-based and LSTM-CTC-based methods, and the recognition rate of the CNN-DFSMN-CTC-based method is 3.52% and 2.23% higher than that of DFSMN and DFSMN-CTC.
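To make the skip-connection idea concrete, the following is a minimal NumPy sketch of a single DFSMN memory block: the current layer's projection is filtered with learnable look-back and look-ahead taps, and the previous layer's memory output is added as a skip connection. The function name, array shapes, and the `stride` parameter are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def dfsmn_memory_block(p, p_prev_mem, a, c, stride=1):
    """Sketch of one DFSMN memory block with a skip connection.

    p          : (T, D) low-rank projection output of the current layer
    p_prev_mem : (T, D) memory output of the previous layer (skip connection)
    a          : (N1+1, D) look-back filter coefficients (tap 0 is the current frame)
    c          : (N2, D) look-ahead filter coefficients
    stride     : frame stride between taps (assumed parameter)
    """
    T, D = p.shape
    N1 = a.shape[0] - 1
    N2 = c.shape[0]
    out = np.zeros_like(p)
    for t in range(T):
        m = a[0] * p[t]                       # current frame
        for i in range(1, N1 + 1):            # look-back taps
            if t - stride * i >= 0:
                m += a[i] * p[t - stride * i]
        for j in range(1, N2 + 1):            # look-ahead taps
            if t + stride * j < T:
                m += c[j - 1] * p[t + stride * j]
        # skip connection: add the previous layer's memory output, which
        # lets gradients bypass the filtering and eases very deep training
        out[t] = p_prev_mem[t] + m
    return out
```

Because the skip connection is a plain addition, the gradient of the loss with respect to `p_prev_mem` contains an identity term regardless of the filter weights, which is what mitigates gradient vanishing in deep stacks.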