Judgement
Situational ethics
Test (biology)
Psychology
Comprehension
Medical education
Ranking (information retrieval)
Population
Applied psychology
Computer science
Social psychology
Medicine
Artificial intelligence
Political science
Paleontology
Law
Biology
Environmental health
Programming language
Identifier
DOI:10.1080/14703297.2023.2258114
Abstract
This study examines the proficiency of ChatGPT, an AI language model, in answering questions from the Situational Judgement Test (SJT), a widely used assessment of the fundamental competencies of medical graduates in the UK. A total of 252 SJT questions were sampled from the Oxford Assess and Progress: Situational Judgement Test book, comprising 82 multiple-choice and 170 ranking questions. ChatGPT achieved a mean accuracy of 77.67% (standard error 1.09%) against the book's answers. Although precise population statistics and a definitive scoring system remain unavailable, the model performed consistently across all five domains tested. To enhance its effectiveness and practical utility, particularly in aiding junior doctors with complex ethical dilemmas, further advances are needed to strengthen its decision-making capabilities beyond factual comprehension.