DOI:10.1016/j.joms.2023.09.015
Abstract
In the fast-evolving landscape of medicine, the marriage between generative artificial intelligence (AI) and medical ethics presents a rich tapestry of opportunities and challenges, both exhilarating and daunting. As these two entities intertwine, it is crucial to deliberate on their implications, especially in fields like oral and maxillofacial surgery (OMS), where patient welfare and ethical standards sit at the forefront. Generative AI, primarily known for its ability to create content, from art to textual outputs, holds enormous potential in medicine. By synthesizing vast swaths of medical data, it can produce patient histories, suggest treatment plans, or even create 3D models for surgical planning. While the potential for increased efficiency and improved patient outcomes is undeniable, the introduction of AI-generated content in patient care raises crucial ethical concerns that must be addressed. First and foremost, there is the matter of authenticity. When a physician reads a patient history or examines a model, there is an intrinsic trust that the information is accurate and untampered. Generative AI, however skilled, is prone to errors or biases inherent in its training data [1-3]. The thought of a surgical procedure being based on inaccurate AI-generated information is troubling. This poses the question: how do we ensure that AI outputs are genuine and free of harmful biases?

References
1. Narayanaswamy CS. Can we write a research paper using artificial intelligence? J Oral Maxillofac Surg. 2023;81:524-526.
2. Dave T, Athaluri SA, Singh S. ChatGPT in medicine: An overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front Artif Intell. 2023;6:1169595.
3. Liebrenz M, Schleifer R, Buadze A, Bhugra D, Smith A. Generating scholarly content with ChatGPT: Ethical challenges for medical publishing. Lancet Digit Health. 2023;5:e105-e106.