Segmentation
Computer science
Interpretability
Modality (human-computer interaction)
Interpretation (philosophy)
Artificial intelligence
Feature (linguistics)
Pattern
Visualization
Pattern recognition (psychology)
Machine learning
Social science
Linguistics
Philosophy
Sociology
Programming language
Authors
Susu Kang, Zhiyuan Chen, Laquan Li, Wei Lü, X. Qi, Shan Tan
Identifier
DOI:10.1016/j.asoc.2023.110825
Abstract
Accurate tumor segmentation of multi-modality PET/CT images plays a vital role in computer-aided cancer diagnosis and treatment. It is crucial to rationally fuse the complementary information in multi-modality PET/CT segmentation. However, existing methods usually lack interpretability and fail to sufficiently identify and aggregate critical information from different modalities. In this study, we proposed a novel segmentation framework that incorporates an interpretation module into the multi-modality segmentation backbone. The interpretation module highlights critical features from each modality based on their contributions to the segmentation performance. To provide explicit supervision for the interpretation module, we introduced a novel interpretation loss with two fusion schemes: strengthened fusion and perturbed fusion. The interpretation loss guides the interpretation module to focus on informative features, enhancing its effectiveness in generating meaningful interpretable masks. Under the guidance of the interpretation module, the proposed approach can fully exploit meaningful features from each modality, leading to better integration of multi-modality information and improved segmentation performance. Ablative and comparative experiments were conducted on two PET/CT tumor segmentation datasets. The proposed approach surpassed the baseline by 1.4 and 1.8 Dice points on the two datasets, respectively, indicating the improvement achieved by the interpretation method. Furthermore, the proposed approach outperformed the best comparison approach by 0.9 and 0.6 Dice points on the two datasets, respectively. In addition, visualization and perturbation experiments further illustrated the effectiveness of the interpretation method in highlighting critical features.
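The abstract describes an interpretation module that gates each modality's features by a learned mask before fusion, and a perturbation test that checks whether masked-out features were truly critical. The paper's actual architecture is not given here; the following is a minimal NumPy sketch of the general idea only, with all function names, shapes, and the sigmoid-gated sum fusion being illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    # Standard logistic function, used here to squash mask logits into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def interpret_and_fuse(feat_pet, feat_ct, w_pet, w_ct):
    """Illustrative sketch (not the paper's architecture):
    each modality's features (N voxels x C channels) pass through a
    linear map + sigmoid to produce a per-voxel interpretable mask,
    which gates that modality's contribution to the fused features."""
    mask_pet = sigmoid(feat_pet @ w_pet)          # (N, 1) mask for PET
    mask_ct = sigmoid(feat_ct @ w_ct)             # (N, 1) mask for CT
    fused = mask_pet * feat_pet + mask_ct * feat_ct  # broadcast over channels
    return fused, mask_pet, mask_ct

def perturb(feat, mask, threshold=0.5):
    """Illustrative 'perturbed fusion' check: zero out the features the
    mask marks as critical; a larger downstream performance drop would
    indicate the mask indeed highlighted informative features."""
    critical = mask > threshold                   # (N, 1) boolean
    return np.where(critical, 0.0, feat)          # suppress critical voxels

# Toy usage with random features (N=6 voxels, C=4 channels).
rng = np.random.default_rng(0)
feat_pet = rng.standard_normal((6, 4))
feat_ct = rng.standard_normal((6, 4))
w_pet = rng.standard_normal((4, 1))
w_ct = rng.standard_normal((4, 1))

fused, mask_pet, mask_ct = interpret_and_fuse(feat_pet, feat_ct, w_pet, w_ct)
perturbed_pet = perturb(feat_pet, mask_pet)
```

The gating masks double as the interpretability output: visualizing `mask_pet` and `mask_ct` shows which voxels each modality contributed, which is the kind of visualization experiment the abstract mentions.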