Authors
Yuanzhe Peng, Jieming Bian, Jie Xu
Identifier
DOI: 10.1109/icassp48485.2024.10448255
Abstract
The fusion of complementary multimodal information is crucial for accurate diagnostics in computational pathology. However, existing multimodal learning approaches require access to users' raw data, posing substantial privacy risks. While Federated Learning (FL) serves as a privacy-preserving alternative, it falls short of addressing the challenges posed by heterogeneous (yet possibly overlapping) modality data across hospitals. To bridge this gap, we propose a Federated Multi-Modal (FedMM) learning framework that federatedly trains multiple single-modal feature extractors to enhance subsequent classification performance, rather than training a unified multimodal fusion model as in existing FL methods. Any participating hospital, even one with a small-scale dataset or limited devices, can leverage these federatedly trained extractors to perform local downstream tasks (e.g., classification) while preserving data privacy. Comprehensive evaluations on two publicly available datasets demonstrate that FedMM notably outperforms two baselines in both accuracy and AUC.
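To make the framework concrete, below is a minimal sketch of the idea the abstract describes: each modality gets its own feature extractor, trained across hospitals with standard federated averaging, after which each hospital keeps the extractor frozen and trains a private local classifier. This is an illustrative reconstruction assuming PyTorch; all names here (Extractor, local_update, fed_avg, the toy dimensions and the throwaway local head) are hypothetical and not taken from the authors' code.

```python
# Hypothetical sketch of FedMM-style training of one single-modal extractor.
import copy
import torch
import torch.nn as nn

class Extractor(nn.Module):
    """Single-modal feature extractor (one such model per modality)."""
    def __init__(self, in_dim: int, feat_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim))
    def forward(self, x):
        return self.net(x)

def local_update(model, data, labels, epochs=1, lr=1e-2):
    """One hospital trains a copy of the shared extractor on its own data."""
    model = copy.deepcopy(model)
    head = nn.Linear(32, 2)  # throwaway local head used only for training
    opt = torch.optim.SGD(list(model.parameters()) + list(head.parameters()),
                          lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(head(model(data)), labels)
        loss.backward()
        opt.step()
    return model.state_dict()  # only extractor weights leave the hospital

def fed_avg(states):
    """Server-side FedAvg: average client weights parameter by parameter."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

# Toy run: two hospitals that share the "image" modality jointly train its
# extractor over a few communication rounds, using synthetic local data.
torch.manual_seed(0)
global_img_extractor = Extractor(in_dim=16)
for _ in range(3):                       # communication rounds
    client_states = []
    for _ in range(2):                   # hospitals holding this modality
        x = torch.randn(8, 16)           # synthetic private data
        y = torch.randint(0, 2, (8,))
        client_states.append(local_update(global_img_extractor, x, y))
    global_img_extractor.load_state_dict(fed_avg(client_states))

# Afterwards each hospital freezes the extractor and fits its own private
# classifier on the extracted features for its local downstream task.
feats = global_img_extractor(torch.randn(4, 16)).detach()
print(feats.shape)                       # torch.Size([4, 32])
```

Because only per-modality extractor weights are exchanged, hospitals with different (possibly overlapping) modality sets can each participate in the rounds for the modalities they hold, and raw data never leaves the institution.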