Artificial intelligence
Image segmentation
Segmentation
Computer science
Resolution
Computer vision
Foundation model
Digital pathology
Stain
Task
Pattern recognition
Pathology
Medicine
Staining
Engineering
Archaeology
Systems engineering
History
Authors
Chong Wang, Yajie Wan, Shuxin Li, Kaili Qu, Xuezhi Zhou, Junjun He, Jing Ke, Yi Yu, Tianyun Wang, Yiqing Shen
Identifier
DOI: 10.1109/TMI.2024.3501352
Abstract
Foundation models like the Segment Anything Model (SAM) have shown promising performance in general image segmentation tasks. However, their effectiveness is limited when applied to pathology images due to the inherent multi-scale structural complexity and staining heterogeneity. To address these challenges, we introduce SegAnyPath, a foundational model specifically designed for pathology image segmentation. SegAnyPath is trained on an extensive public pathology dataset comprising over 1.5 million images and 3.5 million masks. We propose a multi-scale proxy task to handle the diverse resolutions in pathology images, complementing the reconstruction objective in the supervised learning stage. To enhance segmentation performance across stain variations, we introduce a novel self-distillation scheme based on stain augmentations. Furthermore, we propose an innovative task-guided Mixture of Experts (MoE) architecture in the decoder of SegAnyPath for efficient management of distinct pathology segmentation tasks, including cell, tissue, and tumor segmentation. Experimental results demonstrate SegAnyPath's zero-shot generalization capability, achieving a Dice score of 0.6797 across multiple datasets and organs while maintaining consistent performance across varying staining styles and resolutions. In comparison, the fine-tuned SAM achieves a Dice score of only 0.5258 on the same external test sets, indicating a substantial 29.27% improvement by SegAnyPath. SegAnyPath has the potential to advance the field of pathology analysis and improve diagnostic accuracy in clinical settings. The code is available at https://github.com/wagnchogn/SegAnyPath.
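To make the task-guided Mixture of Experts (MoE) idea in the abstract more concrete, below is a minimal, hypothetical PyTorch sketch of a decoder block that routes features to per-task experts (e.g. cell, tissue, tumor segmentation). The class name, shapes, and gating scheme are illustrative assumptions, not the authors' released SegAnyPath implementation (see the linked repository for that).

```python
# Hypothetical sketch of a task-guided Mixture-of-Experts decoder block.
# Names and shapes are illustrative assumptions, not the SegAnyPath code.
import torch
import torch.nn as nn


class TaskGuidedMoE(nn.Module):
    """Routes decoder features to per-task experts (e.g. cell / tissue / tumor)."""

    def __init__(self, dim: int, num_experts: int = 3):
        super().__init__()
        # One lightweight expert MLP per segmentation task.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(num_experts)
        )
        # The task label, embedded and passed through a gate, "guides"
        # expert selection instead of a token-level learned router.
        self.task_embed = nn.Embedding(num_experts, dim)
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); task_id: (batch,) integer task indices.
        gate_logits = self.gate(self.task_embed(task_id))   # (batch, num_experts)
        weights = torch.softmax(gate_logits, dim=-1)         # task-conditioned gating
        expert_outs = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, E, tokens, dim)
        return torch.einsum("be,betd->btd", weights, expert_outs)


if __name__ == "__main__":
    # Minimal usage check with random features and task ids.
    moe = TaskGuidedMoE(dim=256, num_experts=3)
    feats = torch.randn(2, 196, 256)
    tasks = torch.tensor([0, 2])        # e.g. 0 = cell, 2 = tumor segmentation
    print(moe(feats, tasks).shape)      # torch.Size([2, 196, 256])
```

The reported 29.27% relative improvement follows from the quoted scores: (0.6797 - 0.5258) / 0.5258 ≈ 0.2927.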