The emergence of large language models (LLMs), such as GPT, is revolutionizing the field of information retrieval, with applications across a wide range of domains. However, the intricate domain knowledge and the unique software paradigms inherent to the manufacturing sector have posed significant barriers to the effective use of LLMs. To bridge this gap, an error-assisted fine-tuning approach is proposed to adapt LLMs specifically for the manufacturing domain. First, the LLM is fine-tuned on a manufacturing-domain corpus, allowing it to learn the nuances of the manufacturing field. In addition, injecting a labeled dataset into the pre-configured LLM enhances its ability to identify key elements within the domain. To ensure the generation of syntactically valid programs in domain-specific languages and to accommodate environmental constraints, an error-assisted iterative prompting procedure is introduced, which facilitates the generation of reliable code that meets expectations. Experimental results demonstrate the model's proficiency in accurately answering manufacturing-related queries and its effectiveness in generating reliable code, with the accuracy of judgment queries improving by approximately 4.1%. By extending the applicability of LLMs to the manufacturing industry, this research is expected to pave the way for a broad array of new LLM-based applications within manufacturing.
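The abstract does not spell out the error-assisted iterative prompting procedure; a minimal sketch of the general idea, in which validation errors from the generated domain-specific program are fed back into the next prompt, might look as follows. The function names `llm_generate` and `check_program` are placeholders and do not come from the paper.

```python
# Hypothetical sketch of an error-assisted iterative prompting loop.
# `llm_generate` wraps the fine-tuned model's text-generation interface and
# `check_program` validates a candidate program against the DSL syntax and
# the environmental constraints; both are assumptions for illustration.

MAX_ITERATIONS = 5

def error_assisted_generate(llm_generate, check_program, task_description):
    """Iteratively prompt the model, feeding validation errors back in."""
    prompt = task_description
    program = ""
    for _ in range(MAX_ITERATIONS):
        program = llm_generate(prompt)       # candidate DSL program
        errors = check_program(program)      # syntax / environment checks
        if not errors:
            return program                   # valid program: stop early
        # Append the error report so the next attempt can correct it.
        prompt = (
            f"{task_description}\n"
            f"Previous attempt:\n{program}\n"
            f"Errors to fix:\n{errors}"
        )
    return program  # best effort once the iteration budget is exhausted
```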