Recent advancements in large language models (LLMs), particularly GPT-3.5 and GPT-4, have sparked significant interest in their application within the medical field. This research offers a detailed comparative analysis of GPT-3.5 and GPT-4 in annotating radiology reports and generating impressions from chest computed tomography (CT) scans, with the primary objective of using these models to assist healthcare professionals with routine documentation tasks. Employing in-context learning (ICL) and retrieval-augmented generation (RAG), the study focused on generating impression sections from radiological findings. A comprehensive evaluation was conducted using a variety of metrics: recall-oriented understudy for gisting evaluation (ROUGE) for n-gram overlap, Instructor Similarity for contextual similarity, and BERTScore for semantic similarity. The study reveals distinct performance differences between GPT-3.5 and GPT-4 across both zero-shot and few-shot learning scenarios, and shows that prompt choice significantly influenced outcomes, with specific prompts yielding more accurate impressions. The RAG method achieved a superior BERTScore of 0.92, demonstrating its ability to generate semantically rich and contextually accurate impressions, whereas GPT-3.5 and GPT-4 excelled at preserving language tone, with Instructor Similarity scores of approximately 0.92 across scenarios, underscoring the importance of prompt design in effective summarization tasks. These findings emphasize the critical role of prompt design in optimizing model efficacy and point to significant potential for further exploration in prompt engineering. Moreover, the study advocates for the standardized integration of such advanced LLMs into healthcare practice, highlighting their potential to enhance the efficiency and accuracy of medical documentation.
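To make the evaluation protocol concrete, the sketch below shows how a generated impression might be scored against the radiologist-written reference with two of the metrics named above. It is a minimal illustration, assuming the open-source `rouge-score` and `bert-score` packages; the function name, inputs, and example strings are illustrative and are not the authors' code.

```python
# Minimal sketch of scoring a generated impression against a reference impression.
# Assumes the `rouge-score` and `bert-score` packages; names here are illustrative.
from rouge_score import rouge_scorer
from bert_score import score as bert_score


def evaluate_impression(generated: str, reference: str) -> dict:
    """Score a model-generated impression against the radiologist-written one."""
    # N-gram overlap: ROUGE-1 and ROUGE-L F-measures.
    scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
    rouge = scorer.score(reference, generated)

    # Semantic similarity: BERTScore returns precision, recall, and F1 tensors.
    _, _, f1 = bert_score([generated], [reference], lang="en", verbose=False)

    return {
        "rouge1_f": rouge["rouge1"].fmeasure,
        "rougeL_f": rouge["rougeL"].fmeasure,
        "bertscore_f1": float(f1[0]),
    }


if __name__ == "__main__":
    # Hypothetical example pair, for illustration only.
    print(evaluate_impression(
        "No acute cardiopulmonary abnormality.",
        "No evidence of acute cardiopulmonary disease.",
    ))
```

Instructor Similarity would follow the same pattern: embed the generated and reference impressions with an instruction-tuned embedding model and compare them with cosine similarity.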