Text classification is a fundamental task in Natural Language Processing (NLP). Graph neural networks can better handle the large amount of information in text, so effective and fast graph models for text classification have received much attention. However, most existing methods are transductive, which means they cannot handle documents with unseen words and relations. To tackle these problems, we propose a novel method for Text Classification by Fusing Contextual Information via Graph Neural Networks (TextFCG). Concretely, we first construct a single graph for all words in each text and label the edges by fusing their various contextual relations. Our text graph captures diverse information about the document and enhances the connectivity of the graph by introducing more typed edges, which improves the learning effect of the GNN. Then, based on a GNN and a gated recurrent unit (GRU), our model allows local word information to interact with global text information and enhances the sequential representation of nodes. Moreover, we focus on contextual features from the text itself. Extensive experiments on several benchmark datasets and detailed analysis demonstrate the effectiveness of our proposed method on the text classification task.
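To make the per-document graph construction concrete, the following is a minimal sketch of building a single graph over the words of one text with typed (labeled) edges. The relation labels here (`adjacent` for consecutive words, `window` for co-occurrence within a sliding window) and the window size are illustrative assumptions, not the paper's actual relation set.

```python
from collections import defaultdict

def build_text_graph(tokens, window=2):
    """Build one graph over the unique words of a single document.

    Each edge carries a set of contextual relation labels; fusing
    several relation types onto the same graph increases its
    connectivity, in the spirit of the typed edges described above.
    """
    nodes = sorted(set(tokens))
    edges = defaultdict(set)  # (word_u, word_v) -> set of relation labels
    for i, u in enumerate(tokens):
        # Connect u to the words that follow it within the window.
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            v = tokens[j]
            if u == v:
                continue  # skip self-loops
            key = tuple(sorted((u, v)))
            edges[key].add('adjacent' if j == i + 1 else 'window')
    return nodes, dict(edges)

tokens = "graph neural networks classify text via graph structure".split()
nodes, edges = build_text_graph(tokens)
```

In a full model, a GNN would pass messages along these typed edges and a GRU-style update would fuse each node's aggregated neighborhood with its previous state; this sketch covers only the graph construction step.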