Mode
Modality (human–computer interaction)
Computer science
Modal verb
Artificial intelligence
Feature (linguistics)
Machine learning
Task (project management)
Segmentation
Representation (politics)
Natural language processing
Pattern recognition (psychology)
Polymer chemistry
Social science
Linguistics
Chemistry
Philosophy
Management
Sociology
Politics
Political science
Law
Economics
Authors
Hu Wang, Congbo Ma, Jianpeng Zhang, Yuan Zhang, Jodie Avery, M. Louise Hull, Gustavo Carneiro
Identifier
DOI: 10.1007/978-3-031-43901-8_21
Abstract
The problem of missing modalities is both critical and non-trivial to handle in multi-modal models. In multi-modal tasks, it is common for certain modalities to contribute more than others, and if those important modalities are missing, model performance drops significantly. This fact remains unexplored by current multi-modal approaches, which recover the representation of missing modalities by feature reconstruction or blind feature aggregation from the other modalities, instead of extracting useful information from the best-performing modalities. In this paper, we propose a Learnable Cross-modal Knowledge Distillation (LCKD) model that adaptively identifies important modalities and distils knowledge from them to the other modalities, addressing the missing modality issue from a cross-modal perspective. Our approach introduces a teacher election procedure that selects the most "qualified" teachers based on their single-modality performance on each task. Cross-modal knowledge distillation is then performed between teacher and student modalities for each task, pushing the model parameters towards a point that is beneficial for all tasks. Hence, even if the teacher modalities for certain tasks are missing during testing, the available student modalities can accomplish those tasks well based on the knowledge learned from their automatically elected teacher modalities. Experiments on the Brain Tumour Segmentation Dataset 2018 (BraTS2018) show that LCKD outperforms other methods by a considerable margin, improving the state-of-the-art segmentation Dice score by 3.61% for enhancing tumour, 5.99% for tumour core, and 3.76% for whole tumour.
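The abstract describes two mechanisms: electing a teacher modality per task from single-modality validation performance, and distilling that teacher into the remaining (student) modalities. Below is a minimal PyTorch sketch of those two ideas only, assuming toy 3D encoders and Hinton-style soft-label distillation; the names (ModalityEncoder, elect_teachers, distillation_loss) and the validation Dice numbers are hypothetical illustrations, not the authors' released LCKD implementation.

```python
# Minimal sketch: teacher election + cross-modal knowledge distillation.
# All components are illustrative assumptions, not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityEncoder(nn.Module):
    """Toy per-modality 3D encoder with a segmentation head."""

    def __init__(self, in_ch: int, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(in_ch, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Conv3d(feat_dim, num_classes, kernel_size=1)

    def forward(self, x):
        feat = self.backbone(x)
        return feat, self.head(feat)


def elect_teachers(val_dice):
    """Teacher election: for each task, pick the modality whose
    single-modality validation Dice is highest."""
    teachers = {}
    tasks = next(iter(val_dice.values())).keys()
    for task in tasks:
        teachers[task] = max(val_dice, key=lambda m: val_dice[m][task])
    return teachers


def distillation_loss(student_logits, teacher_logits, T: float = 2.0):
    """Soft-label KD loss between a student and a teacher modality branch."""
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)


if __name__ == "__main__":
    # Toy setup: 4 MRI modalities, 3 tumour sub-region tasks (WT, TC, ET).
    modalities = ["FLAIR", "T1", "T1ce", "T2"]
    encoders = {m: ModalityEncoder(1, 8, num_classes=2) for m in modalities}

    # Hypothetical per-modality validation Dice scores used for election.
    val_dice = {
        "FLAIR": {"WT": 0.88, "TC": 0.60, "ET": 0.45},
        "T1":    {"WT": 0.70, "TC": 0.62, "ET": 0.50},
        "T1ce":  {"WT": 0.72, "TC": 0.80, "ET": 0.78},
        "T2":    {"WT": 0.85, "TC": 0.65, "ET": 0.48},
    }
    teachers = elect_teachers(val_dice)  # e.g. {"WT": "FLAIR", "TC": "T1ce", "ET": "T1ce"}

    # Toy volume; in practice each modality would be its own image channel.
    x = torch.randn(2, 1, 8, 8, 8)
    logits = {m: encoders[m](x)[1] for m in modalities}

    # Distil each elected teacher into every other (student) modality.
    # Teacher logits are detached so gradients flow only into the students.
    kd_loss = sum(
        distillation_loss(logits[m], logits[t].detach())
        for task, t in teachers.items()
        for m in modalities if m != t
    )
    print(teachers, float(kd_loss))
```

In this sketch the KD term would be added to the usual per-task segmentation losses during training; at test time, if an elected teacher modality is unavailable, the student branches have already been trained to mimic its predictions.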