Computer Science
Workflow
Artificial Intelligence
Interpretability
Software Deployment
Risk Analysis (Engineering)
Data Science
Software Engineering
Medicine
Database
Authors
J. Ayorinde, Federica Citterio, Matteo Landrò, Elia Peruzzo, Tuba Islam, Simon Tilley, G. N. Taylor, Victoria Bardsley, Pietro Liò, Alexander Samoshkin, Gavin J. Pettigrew
Source
Journal: Journal of the American Society of Nephrology
Date: 2022-11-09
Volume/Issue: 33 (12): 2133-2140
Citations: 10
Identifier
DOI: 10.1681/asn.2022010069
Abstract
Although still in its infancy, artificial intelligence (AI) analysis of kidney biopsy images is anticipated to become an integral aspect of renal histopathology. As these systems are developed, the focus will understandably be on developing ever more accurate models, but successful translation to the clinic will also depend upon other characteristics of the system. In the extreme, deployment of highly performant but “black box” AI is fraught with risk, and high-profile errors could damage future trust in the technology. Furthermore, a major factor determining whether new systems are adopted in clinical settings is whether they are “trusted” by clinicians. Key to unlocking trust will be designing platforms optimized for intuitive human-AI interactions and ensuring that, where judgment is required to resolve ambiguous areas of assessment, the workings of the AI image classifier are understandable to the human observer. Therefore, determining the optimal design for AI systems depends on factors beyond performance, with considerations of goals, interpretability, and safety constraining many design and engineering choices. In this article, we explore challenges that arise in the application of AI to renal histopathology, and consider areas where choices around model architecture, training strategy, and workflow design may be influenced by factors beyond the final performance metrics of the system.
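The abstract argues that the workings of an AI image classifier should be understandable to the human observer. Purely as an illustration (the paper does not prescribe any specific method or framework), the sketch below shows one widely used explanation technique, Grad-CAM, which produces a heatmap highlighting the image regions that drive a classifier's prediction; the ResNet-18 backbone, the randomly initialized weights, and the placeholder input tile are assumptions for the example, standing in for a trained biopsy classifier and a preprocessed biopsy image.

```python
# Illustrative sketch only: Grad-CAM heatmap for an image classifier.
# In practice the model would be a trained renal-biopsy classifier and
# `image` a preprocessed biopsy tile; both are placeholders here.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # random weights; use a trained model in practice
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    # Capture the feature maps of the last convolutional block.
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    # Capture the gradient of the score with respect to those feature maps.
    gradients["value"] = grad_out[0].detach()

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

image = torch.rand(1, 3, 224, 224)          # placeholder input tile
logits = model(image)
class_idx = logits.argmax(dim=1).item()      # explain the predicted class
logits[0, class_idx].backward()

# Grad-CAM: weight each feature map by its average gradient, sum, then ReLU.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
# `cam` can now be overlaid on the input tile as a heatmap for pathologist review.
```

A saliency overlay of this kind is one possible ingredient of the "intuitive human-AI interactions" the authors describe, letting a pathologist check whether the model's attention aligns with histologically meaningful regions; it is not a substitute for the broader design, workflow, and safety considerations the article discusses.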