Medicine
Computer Science
Data Science
Artificial Intelligence
Engineering Ethics
Engineering
Authors
Xiaoye Michael Wang,Ni Zhang,Hongyu He,Trang Nguyen,Kun‐Hsing Yu,Hao Deng,Cynthia Brandt,Danielle S. Bitterman,Ling Pan,Ching‐Yu Cheng,James Zou,Dianbo Liu
Source
Journal: Cornell University - arXiv
Date: 2024-09-11
Identifier
DOI: 10.48550/arxiv.2409.18968
Abstract
Recent advancements in artificial intelligence (AI), particularly in deep learning and large language models (LLMs), have accelerated their integration into medicine. However, these developments have also raised public concerns about the safe application of AI. In healthcare, these concerns are especially pertinent, as the ethical and secure deployment of AI is crucial for protecting patient health and privacy. This review examines potential risks in AI practices that may compromise safety in medicine, including reduced performance across diverse populations, inconsistent operational stability, the need for high-quality data for effective model tuning, and the risk of data breaches during model development and deployment. For medical practitioners, patients, and researchers, LLMs provide a convenient way to interact with AI and data through language. However, their emergence has also amplified safety concerns, particularly due to issues like hallucination. The second part of this article explores safety issues specific to LLMs in medical contexts, including limitations in processing complex logic, challenges in aligning AI objectives with human values, the illusion of understanding, and concerns about diversity. Thoughtful development of safe AI could accelerate its adoption in real-world medical settings.