Segmentation
Computer science
Embedding
Artificial intelligence
Feature (linguistics)
Encoding (memory)
Rank (graph theory)
Pattern recognition (psychology)
Semantics (computer science)
Market segmentation
Image segmentation
Machine learning
Mathematics
Combinatorics
Philosophy
Business
Linguistics
Marketing
Programming language
Authors
Jie Liu, Yixiao Zhang, Jie-Neng Chen, Junfei Xiao, Yongyi Lu, Bennett A. Landman, Yixuan Yuan, Alan Yuille, Yucheng Tang, Zongwei Zhou
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Citations: 7
Identifiers
DOI:10.48550/arxiv.2301.00785
Abstract
An increasing number of public datasets have shown a marked impact on automated organ segmentation and tumor detection. However, because each dataset is small and only partially labeled, and diverse tumor types remain under-investigated, the resulting models are often limited to segmenting specific organs/tumors, ignore the semantics of anatomical structures, and cannot be extended to novel domains. To address these issues, we propose the CLIP-Driven Universal Model, which incorporates text embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models. This CLIP-based label encoding captures anatomical relationships, enabling the model to learn a structured feature embedding and segment 25 organs and 6 types of tumors. The proposed model is developed from an assembly of 14 datasets, using a total of 3,410 CT scans for training, and is then evaluated on 6,162 external CT scans from 3 additional datasets. We rank first on the Medical Segmentation Decathlon (MSD) public leaderboard and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Additionally, the Universal Model is computationally more efficient (6x faster) than dataset-specific models, generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks.
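The core idea in the abstract, that a CLIP text embedding serves as the label encoding so that anatomically related classes get related representations, can be illustrated with a minimal sketch. This is not the authors' code: the embedding function below is a deterministic stand-in for a frozen CLIP text encoder, and the per-pixel dot-product scoring is a simplified hypothetical classifier head, all dimensions are illustrative.

```python
import random

# Hypothetical sketch of CLIP-style label encoding for segmentation.
# Each class name is mapped to a unit-norm text embedding; per-pixel
# image features are scored against every class embedding by dot
# product, so classes with similar embeddings share structure.

DIM = 8  # illustrative embedding dimension (real CLIP uses 512+)

def fake_text_embedding(label: str, dim: int = DIM) -> list[float]:
    """Stand-in for a frozen CLIP text encoder: deterministic per label."""
    rng = random.Random(label)  # seed by label name for reproducibility
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v]

def segment(pixel_features: list[list[float]],
            class_embeddings: dict[str, list[float]]) -> list[str]:
    """Assign each pixel feature to the most similar class embedding."""
    labels = []
    for feat in pixel_features:
        best = max(class_embeddings,
                   key=lambda c: sum(f * e
                                     for f, e in zip(feat, class_embeddings[c])))
        labels.append(best)
    return labels

classes = {name: fake_text_embedding(name)
           for name in ["liver", "liver tumor", "pancreas"]}
# A pixel feature identical to the "liver" embedding resolves to "liver".
print(segment([classes["liver"]], classes))
```

In the actual Universal Model the text embedding conditions the segmentation decoder rather than acting as a plain nearest-class head, but the sketch shows why a shared language-derived label space lets one model cover 25 organs and 6 tumor types without one-hot labels.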