Authors
Kaiyue Wang,Sixing Yin,Yining Wang,Shufang Li
Identifier
DOI:10.1145/3590003.3590040
Abstract
Medical image segmentation is crucial for facilitating pathology assessment, ensuring reliable diagnosis and monitoring disease progression. Deep-learning models have been extensively applied to automate medical image analysis and reduce human effort. However, the opacity of deep-learning models limits their clinical practicality, since misleading model output carries an unaffordably high risk of misdiagnosis. In this paper, we propose an explainability metric as part of the loss function. The proposed explainability metric is derived from the Class Activation Map (CAM) with learnable weights, so that the model can be optimized to achieve a desirable balance between segmentation performance and explainability. Experiments show that the proposed model visibly improves the Dice score, Jaccard similarity, and Recall compared with U-net. In addition, the results make clear that the proposed model surpasses the conventional U-net in terms of explainability performance.
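The abstract describes augmenting a segmentation loss with a CAM-based explainability term. A minimal sketch of that idea is shown below, assuming a soft Dice loss, a min-max-normalized CAM built from learnable feature-map weights, and a simple mean-absolute-difference explainability penalty with a trade-off weight `lambda_expl` — all of these specifics are illustrative assumptions, not the authors' exact formulation.

```python
def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over flattened probability maps."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (total + eps)

def cam(features, weights):
    """Class Activation Map: weighted sum of flattened feature maps,
    min-max normalized to [0, 1]. `weights` are learnable in the paper."""
    n = len(features[0])
    m = [sum(w * f[i] for w, f in zip(weights, features)) for i in range(n)]
    lo, hi = min(m), max(m)
    rng = (hi - lo) or 1.0
    return [(v - lo) / rng for v in m]

def explainability_loss(cam_map, target):
    """Mean absolute difference between CAM and ground-truth mask --
    a simple proxy for 'the model attends where the lesion is'."""
    return sum(abs(c - t) for c, t in zip(cam_map, target)) / len(target)

def total_loss(pred, target, features, cam_weights, lambda_expl=0.5):
    """Combined objective: segmentation loss plus weighted
    explainability term (lambda_expl is a hypothetical trade-off knob)."""
    return dice_loss(pred, target) + lambda_expl * explainability_loss(
        cam(features, cam_weights), target)
```

In practice both terms would be computed on tensors inside a deep-learning framework so gradients flow to the CAM weights; the pure-Python version above only illustrates the structure of the objective.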