Integrating artificial intelligence (AI) into public health education represents a pivotal advancement in medical knowledge dissemination, particularly for chronic diseases such as asthma. This study assesses the accuracy and comprehensiveness of ChatGPT, a conversational AI model, in providing asthma-related information. Employing a rigorous mixed-methods approach, healthcare professionals evaluated ChatGPT's responses to the Asthma General Knowledge Questionnaire for Adults (AGKQA), a standardized instrument covering a range of asthma-related topics. Responses were graded for accuracy and completeness and analyzed with statistical tests to assess reproducibility and consistency. ChatGPT showed notable proficiency in conveying asthma knowledge, answering the etiology and pathophysiology categories flawlessly and achieving substantial accuracy in the medication category, where 70% of responses were graded as accurate. However, the remaining 30% of medication-related responses were of mixed accuracy, highlighting the need for further refinement of ChatGPT's capabilities to ensure reliability in this critical area of asthma education. Reproducibility analysis demonstrated a consistent 100% rate across all categories, affirming ChatGPT's reliability in delivering uniform information, and statistical analyses further confirmed its stability. These findings underscore ChatGPT's promise as a valuable educational tool for asthma while emphasizing the need for ongoing improvement to address the observed limitations, particularly regarding medication-related information.
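To illustrate how graded responses of this kind could be rolled up into per-category accuracy and reproducibility percentages, the sketch below works through a toy aggregation. It is not the study's analysis code: the category names, grade labels, and the two-run comparison scheme are assumptions introduced purely for demonstration.

```python
# Illustrative sketch only (not the authors' analysis pipeline): aggregating
# hypothetical rater grades of ChatGPT responses to AGKQA items into
# per-category accuracy and reproducibility figures.
from collections import defaultdict

# Each record: (category, grade on first run, grade on repeated run), where a
# grade is one of "accurate", "mixed", or "inaccurate". All values are made up.
graded_responses = [
    ("etiology", "accurate", "accurate"),
    ("pathophysiology", "accurate", "accurate"),
    ("medications", "accurate", "accurate"),
    ("medications", "mixed", "mixed"),
]

accuracy = defaultdict(lambda: [0, 0])         # category -> [accurate count, total]
reproducibility = defaultdict(lambda: [0, 0])  # category -> [matching grades, total]

for category, grade_run1, grade_run2 in graded_responses:
    accuracy[category][0] += grade_run1 == "accurate"
    accuracy[category][1] += 1
    # Reproducibility here means the same question, asked twice, earns the same grade.
    reproducibility[category][0] += grade_run1 == grade_run2
    reproducibility[category][1] += 1

for category in accuracy:
    acc_hit, acc_total = accuracy[category]
    rep_hit, rep_total = reproducibility[category]
    print(f"{category}: accuracy {100 * acc_hit / acc_total:.0f}%, "
          f"reproducibility {100 * rep_hit / rep_total:.0f}%")
```

Under these assumed inputs, the script would report 100% accuracy for etiology and pathophysiology, a lower accuracy for medications, and 100% reproducibility throughout, mirroring the shape of the results summarized above.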