For short texts that are information-dense, unstructured, and non-standard, a text classification model (BERT-CNN-BiLSTM) is proposed that fuses the BERT model with a convolutional neural network (CNN) and a bidirectional long short-term memory (BiLSTM) network. To improve data-processing efficiency and classification precision, word vectors trained by BERT serve as the model's embedding layer, which preserves semantic information and enhances the semantic representation of words. The CNN extracts local semantic features of the text, and a gated linear unit (GLU) is applied to optimise the CNN and mitigate gradient dispersion (vanishing gradients). The BiLSTM then captures contextual information, further improving classification. Experimental results show that using BERT-trained word vectors yields better performance, and that BERT-CNN-BiLSTM significantly outperforms CNN, BERT-CNN, and other baselines in classification precision, recall, and F1, with improvements of at least 1.44%, 1.66%, and 1.69%, respectively.
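The GLU gating used to optimise the CNN can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; it only shows the standard GLU formulation GLU(x) = a ⊙ σ(b), where the convolution's output channels are split into a linear half `a` and a gate half `b`. Because `a` passes through without a squashing nonlinearity, gradients have an ungated linear path, which is why GLU helps reduce gradient dispersion:

```python
import numpy as np

def glu(x, axis=-1):
    """Gated linear unit: split x into halves (a, b) along `axis`
    and return a * sigmoid(b). Half the channels act as values,
    the other half as gates."""
    a, b = np.split(x, 2, axis=axis)
    return a * (1.0 / (1.0 + np.exp(-b)))

# Toy example: a feature map with 4 channels becomes 2 gated channels.
x = np.array([[1.0, -2.0, 0.0, 3.0]])
y = glu(x)
print(y.shape)  # (1, 2): the channel dimension is halved
```

In the model described above, such a gate would sit on the CNN's convolutional outputs; note that GLU halves the channel count, so the convolution must emit twice the desired number of feature maps.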