Authors
Sebastian Volkmer, Andreas Meyer‐Lindenberg, Emanuel Schwarz
Identifier
DOI: 10.1016/j.psychres.2024.116026
Abstract
The ability of Large Language Models (LLMs) to analyze and respond to freely written text is generating increasing excitement in the field of psychiatry; such models present unique opportunities and challenges for psychiatric applications. This review article seeks to offer a comprehensive overview of LLMs in psychiatry, covering their model architecture, potential use cases, and clinical considerations. LLMs such as ChatGPT/GPT-4 are trained on huge amounts of text data and are sometimes fine-tuned for specific tasks. This opens up a wide range of possible psychiatric applications, such as accurately predicting individual patient risk factors for specific disorders, engaging in therapeutic intervention, and analyzing therapeutic material, to name a few. However, adoption in the psychiatric setting presents many challenges, including inherent limitations and biases in LLMs, concerns about explainability and privacy, and the potential harm resulting from generated misinformation. This review covers these opportunities and limitations and highlights considerations for applying such models in a real-world psychiatric context.