Febrile illness in returned travelers presents a diagnostic challenge in non-endemic settings. Chat Generative Pre-trained Transformer (ChatGPT) has the potential to assist with medical tasks, yet its diagnostic performance in clinical settings has rarely been evaluated. We conducted a preliminary validation assessment of ChatGPT-4o's performance in the workup of fever in returning travelers. We retrieved the medical records of returning travelers hospitalized with fever during 2009-2024. The clinical scenario of each case at the time of presentation to the emergency department was submitted to ChatGPT-4o in a detailed, uniform prompt format. The model was then asked four consistent questions concerning the differential diagnosis and the recommended workup. To avoid training the model on the answers, it was kept blinded to the final diagnosis. Our primary outcome was ChatGPT-4o's success rate in predicting the final diagnosis (gold standard) when asked to specify the top three differential diagnoses. Secondary outcomes were the success rates when the model was asked to specify the single most likely diagnosis and all necessary diagnostic tests. We also assessed ChatGPT-4o as a predictive tool for malaria and qualitatively evaluated its failures. ChatGPT-4o predicted the final diagnosis in 68% (95% CI 59-77%), 78% (95% CI 69-85%), and 83% (95% CI 74-89%) of the 114 cases when asked to specify the most likely diagnosis, the top three diagnoses, and all possible diagnoses, respectively. For predicting malaria, ChatGPT-4o showed a sensitivity of 100% (95% CI 93-100%) and a specificity of 94% (95% CI 85-98%). The model failed to provide the final diagnosis in 18% (20/114) of cases, primarily by failing to predict globally endemic infections (16/21, 76%). ChatGPT-4o demonstrated high diagnostic accuracy when prompted with real-life scenarios of febrile returning travelers presenting to the emergency department, especially for malaria.
Model training is expected to further improve performance and to facilitate diagnostic decision-making in the field.
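The reported proportions can be sanity-checked against their confidence intervals. A minimal sketch using the Wilson score interval follows; the abstract does not state its CI method (an exact Clopper-Pearson interval is common and would yield slightly different bounds, e.g. 59-77% rather than the 59.4-76.2% computed here), and the numerator 78 is an assumption inferred from 68% of 114 cases, not a figure given in the abstract.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion of k successes in n trials."""
    p = k / n
    denom = 1 + z**2 / n
    center = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - margin) / denom, (center + margin) / denom

# Most-likely-diagnosis success: 68% of 114 cases, assumed here to be 78/114
# (78/114 = 68.4%; the exact count is not reported in the abstract).
lo, hi = wilson_ci(78, 114)
print(f"68.4% (95% CI {lo:.1%} to {hi:.1%})")
```

The same function applies to the other reported proportions (e.g. the top-three and all-possible-diagnoses success rates) by substituting the corresponding counts.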