Keywords
Interpretability, Computer science, Modality, Theme (music), Trustworthiness, Machine learning, Artificial intelligence, Feature selection, Data mining, Social science, Physics, Computer security, Sociology, Acoustics
Authors
Yuxing Lu, Rui Peng, Bingheng Jiang, Jinzhuo Wang
Identifier
DOI: 10.1109/bibm58861.2023.10386044
Abstract
Omics data are inherently multimodal. Existing multimodal learning methods mainly focus on exploiting complementary information across modalities and integrating it into unified representations. However, few studies have addressed the interpretability of features and modalities or the reliability of results, both of which are crucial in domains such as precision medicine and the life sciences. We propose a Multi-omics Trustworthy Integration Framework (MoTIF) that improves the reliability of multimodal learning models by adding dynamic feature-selection and modality-selection modules and by introducing uncertainty scores into the classification process to indicate how reliable the model's results are, in line with our Trustworthy Multimodal Integration (TMI) rule. We conduct exhaustive experiments on five multi-omics datasets derived from TCGA. The results demonstrate that MoTIF improves performance on multi-omics classification tasks and provides a more detailed explanation of the model's internal mechanism and of the trustworthiness of its classification results. Code for MoTIF is available at https://github.com/YuxingLu613/MoTIF.
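The abstract only describes these components at a high level. The following is a minimal, hypothetical PyTorch sketch of how per-modality feature gates, a modality-selection gate, and a predictive-entropy uncertainty score could be wired together for multi-omics classification. It is not the authors' released implementation (see the repository above for that); the class names, sigmoid/softmax gating, entropy-based uncertainty, and all dimensions are illustrative assumptions.

# Hypothetical sketch only; not the MoTIF codebase.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityBranch(nn.Module):
    """Encodes one omics modality with a soft, learnable feature-selection gate."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.feature_gate = nn.Parameter(torch.zeros(in_dim))  # per-feature gate logits
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())

    def forward(self, x):
        gate = torch.sigmoid(self.feature_gate)   # soft feature-selection mask in [0, 1]
        return self.encoder(x * gate), gate

class TrustworthyFusion(nn.Module):
    """Weights each modality embedding by a learned modality gate, then classifies."""
    def __init__(self, in_dims, hidden_dim, n_classes):
        super().__init__()
        self.branches = nn.ModuleList(ModalityBranch(d, hidden_dim) for d in in_dims)
        self.modality_gate = nn.Parameter(torch.zeros(len(in_dims)))  # modality logits
        self.classifier = nn.Linear(hidden_dim, n_classes)

    def forward(self, xs):
        embeds, feat_gates = zip(*(branch(x) for branch, x in zip(self.branches, xs)))
        weights = torch.softmax(self.modality_gate, dim=0)   # modality-selection weights
        fused = sum(w * e for w, e in zip(weights, embeds))
        logits = self.classifier(fused)
        probs = F.softmax(logits, dim=-1)
        # Predictive entropy as a per-sample uncertainty score (higher = less reliable).
        uncertainty = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)
        return logits, uncertainty, weights, feat_gates

if __name__ == "__main__":
    # Three made-up omics modalities, e.g. mRNA, miRNA, methylation feature vectors.
    model = TrustworthyFusion(in_dims=[2000, 500, 300], hidden_dim=64, n_classes=5)
    xs = [torch.randn(8, d) for d in (2000, 500, 300)]
    logits, uncertainty, modality_weights, _ = model(xs)
    print(logits.shape, uncertainty.shape, modality_weights)

In this sketch the feature gates and modality weights can be inspected after training as a rough proxy for feature- and modality-level interpretability, and the entropy score flags predictions that should be treated with caution; the actual MoTIF design may differ.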