The aim of this scoping review is to shed light on the current state of the art regarding ChatGPT's potential applications in clinical decision support, as well as its accuracy, sensitivity, speed, and reliability across different clinical contexts (diagnosis, differential diagnosis, treatment, triage, and surgical support). A total of 225 articles were found, of which 50 were included after retrieval and eligibility screening; most were original research articles, with a few reviews and commentaries. ChatGPT performs well in diagnosis when given complete data but struggles with incomplete or ambiguous information. Its differential diagnoses are inconsistent, especially in complex cases. It shows good sensitivity in treatment recommendations but lacks personalization and requires human oversight. In triage, ChatGPT is accurate, with high sensitivity for hospitalization decisions but lower specificity for safe discharges. In surgical support, it aids planning but cannot adapt to intraoperative changes without human input. The results indicate that ChatGPT has potential to support clinical decisions but also highlight significant current limitations, including the need for medical-specific adaptation; the risk of generating false ("artificial hallucinations"), incomplete, or misleading information; and ethical and legal issues that remain to be addressed.