The Manchu language, a minority language of China, is of significant historical and research value. An increasing number of Manchu documents have been digitized into image format for better preservation and study, and many researchers have recently focused on recognizing Manchu words in these digitized documents. Previous approaches recognize a variety of Manchu words based on visual cues alone. However, such visual-based approaches have two obvious drawbacks: it is difficult to distinguish between similar or distorted letters, and portions of letters obscured by breakage or stains are hard to identify. To cope with these two challenges, we propose a visual-language framework, namely the Visual-Language framework for Manchu word Recognition (VLMR), which fuses visual and semantic information to accurately recognize Manchu words. Whenever visual information is unavailable, the language model can automatically infer the semantics of the word. The performance of our method is further enhanced by introducing a self-knowledge distillation network. In addition, we create a new handwritten Manchu word dataset, named HMW, which contains 6,721 handwritten Manchu words. The proposed approach is evaluated on WMW and HMW, and the experiments show that it achieves state-of-the-art performance on both datasets.