Fusion gene
Graph
Gene expression
Computational biology
Pathology
Gene
Biology
Computer science
Medicine
Genetics
Theoretical computer science
Authors
Yi Zheng,R. Conrad,Emily Green,Eric Burks,Margrit Betke,Jennifer Beane,Vijaya B. Kolachalama
Identifiers
DOI:10.1101/2023.10.26.564236
Abstract
Multimodal machine learning models are being developed to analyze pathology images and other modalities, such as gene expression, to gain clinical and biological insights. However, most frameworks for multimodal data fusion do not fully account for the interactions between different modalities. Here, we present an attention-based fusion architecture that integrates a graph representation of pathology images with gene expression data and concomitantly learns from the fused information to predict patient-specific survival. In our approach, pathology images are represented as undirected graphs, and their embeddings are combined with embeddings of gene expression signatures using an attention mechanism to stratify tumors by patient survival. We show that our framework improves the survival prediction of human non-small cell lung cancers, outperforming existing state-of-the-art approaches that leverage multimodal data. Our framework can facilitate spatial molecular profiling to identify tumor heterogeneity using pathology images and gene expression data, complementing results obtained from more expensive spatial transcriptomic and proteomic technologies.
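The abstract describes fusing graph-based embeddings of pathology images with gene expression embeddings through an attention mechanism to predict survival. A minimal NumPy sketch of one plausible cross-attention fusion step is shown below; the dimensions, weight matrices, and linear risk head are illustrative assumptions only, not the authors' released implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(img_tokens, gene_emb, W_q, W_k, W_v):
    """Cross-attention: the gene-expression embedding queries the
    graph-node embeddings of the pathology image (hypothetical setup)."""
    q = gene_emb @ W_q                                 # (1, d) query
    k = img_tokens @ W_k                               # (n, d) keys
    v = img_tokens @ W_v                               # (n, d) values
    weights = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # (1, n) attention
    context = weights @ v                              # (1, d) pooled image context
    # Concatenate attended image context with the gene embedding
    return np.concatenate([context, gene_emb], axis=-1)

rng = np.random.default_rng(0)
d = 8
img_tokens = rng.normal(size=(5, d))   # 5 graph-node embeddings (illustrative)
gene_emb   = rng.normal(size=(1, d))   # one gene-signature embedding (illustrative)
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

fused = attention_fuse(img_tokens, gene_emb, W_q, W_k, W_v)
w_out = rng.normal(size=(2 * d, 1))    # hypothetical linear survival-risk head
risk = (fused @ w_out).item()
print(fused.shape)  # → (1, 16)
```

In a trained model the node embeddings would come from a graph neural network over the tissue graph and the projections would be learned; here random values stand in solely to show the tensor flow of the fusion step.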