Interpretability
Computer science
Artificial intelligence
Machine learning
Overhead (engineering)
Data mining
Quality (concept)
Epistemology
Operating system
Philosophy
Authors
Junsheng Mu, Michel Kadoch, Tongtong Yuan, Wenzhe Lv, Qiang Liu, Bohan Li
Source
Journal: IEEE Journal of Biomedical and Health Informatics
[Institute of Electrical and Electronics Engineers]
Date: 2024-03-18
Volume/Issue: 28 (6): 3206-3218
Citations: 2
Identifier
DOI:10.1109/jbhi.2024.3375894
Abstract
Federated learning (FL) enables collaborative training of machine learning models across distributed medical data sources without compromising privacy. However, applying FL to medical image analysis presents challenges such as high communication overhead and data heterogeneity. This paper proposes novel FL techniques using explainable artificial intelligence (XAI) for efficient, accurate, and trustworthy analysis. A heterogeneity-aware causal learning approach selectively sparsifies model weights based on their causal contributions, significantly reducing communication requirements while retaining performance and improving interpretability. Furthermore, blockchain provides decentralized quality assessment of client datasets. The assessment scores adjust aggregation weights so that higher-quality data has more influence during training, improving model generalization. Comprehensive experiments show that our XAI-integrated FL framework enhances efficiency, accuracy, and interpretability. The causal learning method decreases communication overhead while maintaining segmentation accuracy. The blockchain-based data valuation mitigates issues from low-quality local datasets. Our framework provides essential model explanations and trust mechanisms, making FL viable for clinical adoption in medical image analysis.
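The abstract describes two mechanisms at a high level: clients sparsify their model updates according to per-weight contribution scores, and the server aggregates updates using weights derived from dataset-quality assessments. The sketch below illustrates both ideas in their simplest form; it is not the paper's algorithm. The `scores` argument, the `keep_ratio`, and the `quality_scores` are placeholders, and the paper's causal scoring and blockchain-based assessment are not reproduced here.

```python
import numpy as np

def sparsify_update(update, scores, keep_ratio=0.1):
    """Keep only the top fraction of entries ranked by |score| and zero
    the rest, shrinking the payload each client uploads to the server."""
    flat = np.abs(scores).ravel()
    k = max(1, int(keep_ratio * flat.size))
    # Value of the k-th largest |score|; everything below it is dropped.
    threshold = np.partition(flat, -k)[-k]
    return np.where(np.abs(scores) >= threshold, update, 0.0)

def aggregate(updates, quality_scores):
    """Quality-weighted average of client updates: assessment scores are
    normalized so higher-quality clients contribute proportionally more."""
    w = np.asarray(quality_scores, dtype=float)
    w = w / w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))

# Toy round: three clients sparsify their updates, then the server
# aggregates them with (hypothetical) quality scores.
rng = np.random.default_rng(0)
client_updates = [rng.normal(size=(4, 4)) for _ in range(3)]
sparse_updates = [sparsify_update(u, u, keep_ratio=0.25) for u in client_updates]
global_update = aggregate(sparse_updates, quality_scores=[0.9, 0.5, 0.2])
```

In a real deployment the quality scores would come from the decentralized assessment the paper describes, and the per-weight scores from its causal-contribution analysis rather than raw magnitudes.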