Keywords
Deep learning; Foundation (evidence); Modality (human–computer interaction); Artificial intelligence; Computer science; Cancer; Machine learning; Medicine; Internal medicine; History; Archaeology
Authors
Wen Zhu, Yiwen Chen, Shanling Nie, Hai Yang
Identifier
DOI: 10.1109/bibm58861.2023.10385661
Abstract
Cancer survival prediction is pivotal for tailoring individualized treatment strategies and guiding clinical decision-making. However, existing methods struggle to efficiently exploit the complex distribution of medical data across modalities. In response, we present SAMMS, a multi-omics multimodal deep learning framework for survival prediction. SAMMS leverages the Segment Anything image segmentation model to characterize pathological images, and integrates these representations with multi-omics data and clinical information to model a diverse range of modalities jointly. The framework combines a modality-specific subnetwork with a cross-modality common subnetwork, capturing both intra-modality characteristics and inter-modality correlations. SAMMS outperformed competing methods on TCGA's LGG and KIRC tumor datasets, and a series of analyses showed that it extracts complementary information from multimodal data, yielding richer and more integrative multimodal representations. These advances promise to improve cancer survival analysis, strengthening the precision and efficacy of patient-centered treatment, disease management, and clinical decision-making.
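The abstract describes the architecture only at a high level. The sketch below is a minimal, hypothetical illustration of the fusion pattern it names: one modality-specific encoder per input (pathology-image features such as those a Segment Anything backbone might produce, multi-omics vectors, and clinical variables) feeding a shared cross-modality subnetwork that outputs a risk score, trained here with a Cox-style partial likelihood. All class names, feature dimensions, and the loss choice are assumptions for illustration, not SAMMS's published implementation.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Modality-specific subnetwork: maps one modality to a shared embedding size."""
    def __init__(self, in_dim, hid_dim=256, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(hid_dim, emb_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class MultimodalSurvivalNet(nn.Module):
    """Hypothetical SAMMS-style fusion: per-modality encoders feed a shared
    cross-modality common subnetwork that outputs a single log-risk score."""
    def __init__(self, dims, emb_dim=128):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {name: ModalityEncoder(d, emb_dim=emb_dim) for name, d in dims.items()}
        )
        self.common = nn.Sequential(
            nn.Linear(emb_dim * len(dims), emb_dim), nn.ReLU(),
            nn.Linear(emb_dim, 1),  # log-risk for a Cox-style objective
        )

    def forward(self, inputs):
        # Concatenate modality embeddings in a deterministic key order.
        z = torch.cat([self.encoders[k](v) for k, v in sorted(inputs.items())], dim=-1)
        return self.common(z).squeeze(-1)

def cox_partial_nll(risk, time, event):
    """Negative Cox partial log-likelihood (Breslow-style, no tie correction).
    Sorting by descending time makes each cumulative logsumexp the risk set."""
    order = torch.argsort(time, descending=True)
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)

# Usage with hypothetical feature sizes: "path" could hold SAM-derived
# image features; "omics" and "clinical" are flattened tabular inputs.
dims = {"path": 256, "omics": 1000, "clinical": 16}
model = MultimodalSurvivalNet(dims)
batch = {k: torch.randn(8, d) for k, d in dims.items()}
risk = model(batch)
loss = cox_partial_nll(risk, torch.rand(8), torch.randint(0, 2, (8,)).float())
```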