Computer science
Artificial intelligence
Robustness (evolution)
Segmentation
Voxel
Deep learning
Trustworthiness
Machine learning
Dempster-Shafer theory
Medical imaging
Pattern recognition (psychology)
Computer security
Biochemistry
Gene
Chemistry
Authors
Lucas Fidon,Michaël Aertsen,Florian Kofler,Andrea Bink,Anna L. David,Thomas Deprest,Doaa Emam,Frédéric Guffens,András Jakab,Gregor Kasprian,Patric Kienast,Andrew Melbourne,Bjoern Menze,Nada Mufti,Ivana Pogledić,Daniela Prayer,Marlene Stuempflen,Esther Van Elslander,Sébastien Ourselin,Jan Deprest,Tom Vercauteren
Identifier
DOI: 10.1109/tpami.2023.3346330
Abstract
Deep learning models for medical image segmentation can fail unexpectedly and spectacularly for pathological cases and for images acquired at centers other than those of the training images, producing labeling errors that violate expert knowledge. Such errors undermine the trustworthiness of deep learning models for medical image segmentation. Mechanisms for detecting and correcting such failures are essential for safely translating this technology into clinics and are likely to be a requirement of future regulations on artificial intelligence (AI). In this work, we propose a trustworthy AI theoretical framework and a practical system that can augment any backbone AI system using a fallback method and a fail-safe mechanism based on Dempster-Shafer theory. Our approach relies on an actionable definition of trustworthy AI. Our method automatically discards the voxel-level labels predicted by the backbone AI that violate expert knowledge and relies on a fallback method for those voxels. We demonstrate the effectiveness of the proposed trustworthy AI approach on the largest reported annotated dataset of fetal MRI, consisting of 540 manually annotated fetal brain 3D T2w MRIs from 13 centers. Our trustworthy AI method improves the robustness of four backbone AI models for fetal brain MRIs acquired across various centers and for fetuses with various brain abnormalities.
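The abstract describes discarding voxel-level predictions that violate expert knowledge and substituting a fallback for those voxels. The sketch below is a rough illustration of such a voxel-wise fail-safe, not the authors' algorithm: the function name `trustworthy_fusion`, the array shapes, and the anatomical-plausibility mask are assumptions introduced for the example, and a hard replacement stands in for the Dempster-Shafer evidence fusion used in the paper.

```python
import numpy as np

def trustworthy_fusion(backbone_probs, fallback_probs, plausible_mask):
    """Hypothetical voxel-wise fail-safe (illustrative, not the paper's method).

    backbone_probs : (C, D, H, W) softmax probabilities from the backbone AI.
    fallback_probs : (C, D, H, W) probabilities from a fallback method
                     (e.g. an atlas-based prior).
    plausible_mask : (C, D, H, W) boolean array, True where a class is
                     anatomically plausible according to expert knowledge.
    Returns a (D, H, W) integer label map.
    """
    backbone_labels = backbone_probs.argmax(axis=0)
    # A voxel "violates expert knowledge" if its predicted class is
    # implausible at that location.
    violates = ~np.take_along_axis(
        plausible_mask, backbone_labels[None], axis=0
    )[0]
    # Fall back to the alternative prediction for those voxels only.
    fallback_labels = fallback_probs.argmax(axis=0)
    return np.where(violates, fallback_labels, backbone_labels)

# Toy usage: a 2-class problem on a tiny 3D volume.
rng = np.random.default_rng(0)
backbone = rng.random((2, 4, 4, 4)); backbone /= backbone.sum(axis=0)
fallback = rng.random((2, 4, 4, 4)); fallback /= fallback.sum(axis=0)
plausible = np.ones((2, 4, 4, 4), dtype=bool)
plausible[1, :2] = False  # class 1 deemed implausible in the first two slices
segmentation = trustworthy_fusion(backbone, fallback, plausible)
```

In the paper, the fail-safe combines backbone and fallback evidence via Dempster-Shafer theory rather than the hard per-voxel replacement shown here, which is used only to keep the sketch short.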