Keywords
Computer science
Digital pathology
Artificial intelligence
Natural language processing
Histopathology
Leverage (statistics)
Task (project management)
Medical diagnosis
Pathology
Medicine
Management
Economics
Authors
Ming Y. Lu,Bowen Chen,Drew F. K. Williamson,Richard J. Chen,Ivy Liang,Tong Ding,Guillaume Jaume,Igor Odintsov,J. Andrew Zhang,Long P. Le,Georg K. Gerber,Anil V. Parwani,Faisal Mahmood
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Citations: 8
Identifier
DOI:10.48550/arxiv.2307.12914
Abstract
The accelerated adoption of digital pathology and advances in deep learning have enabled the development of powerful models for various pathology tasks across a diverse array of diseases and patient cohorts. However, model training is often difficult due to label scarcity in the medical domain and the model's usage is limited by the specific task and disease for which it is trained. Additionally, most models in histopathology leverage only image data, a stark contrast to how humans teach each other and reason about histopathologic entities. We introduce CONtrastive learning from Captions for Histopathology (CONCH), a visual-language foundation model developed using diverse sources of histopathology images, biomedical text, and notably over 1.17 million image-caption pairs via task-agnostic pretraining. Evaluated on a suite of 13 diverse benchmarks, CONCH can be transferred to a wide range of downstream tasks involving either or both histopathology images and text, achieving state-of-the-art performance on histology image classification, segmentation, captioning, text-to-image and image-to-text retrieval. CONCH represents a substantial leap over concurrent visual-language pretrained systems for histopathology, with the potential to directly facilitate a wide array of machine learning-based workflows requiring minimal or no further supervised fine-tuning.
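The abstract describes CONCH as pretrained contrastively on over 1.17 million image-caption pairs. The paper's actual architecture and objective are not given here, but the general recipe behind contrastive image-caption pretraining (as popularized by CLIP-style models) is a symmetric cross-entropy loss over an image-text similarity matrix, where each image's matching caption is the positive and all other captions in the batch are negatives. The sketch below is a minimal, hypothetical NumPy illustration of that loss, not CONCH's implementation; the function name, temperature value, and embedding shapes are all assumptions for illustration.

```python
import numpy as np

def symmetric_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Illustrative CLIP-style loss: NOT the CONCH implementation.

    img_emb, txt_emb: (N, D) arrays where row i of each is a matched
    image-caption pair; all other rows serve as in-batch negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    # (N, N) similarity matrix; entry (i, j) scores image i vs caption j.
    logits = img @ txt.T / temperature
    n = logits.shape[0]

    def cross_entropy_diag(l):
        # Numerically stable log-softmax per row; the correct "class"
        # for row i is column i (the matched pair on the diagonal).
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(log_probs[np.arange(n), np.arange(n)])

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))
```

Under this objective, aligned pairs drive the loss toward zero while mismatched pairs keep it high, which is what enables the zero-shot transfer to classification and cross-modal retrieval mentioned in the abstract.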