Keywords
Segmentation, Computer science, Artificial intelligence, Zero (linguistics), Annotation, Shot (pellet), Image (mathematics), Leapfrog surveillance, Minimum bounding box, Matching (statistics), Information retrieval, Computer vision, Mathematics, Statistics, Philosophy, Linguistics, Chemistry, Organic chemistry
Authors
Zekun Jiang, Dongjie Cheng, Ziyuan Qin, Jun Gao, Qicheng Lao, Abdullaev Bakhrom Ismoilovich, Urazboev Gayrat, Yuldashov Elyorbek, Bekchanov Habibullo, Defu Tang, Linjing Wei, Kang Li, Shouxin Zhang
Identifier
DOI:10.26599/bdma.2024.9020058
Abstract
This study presents a novel multimodal medical image zero-shot segmentation algorithm, the text-visual-prompt segment anything model (TV-SAM), which requires no manual annotations. TV-SAM integrates the large language model GPT-4, the vision-language model GLIP, and SAM to autonomously generate descriptive text prompts and visual bounding box prompts from medical images, thereby enhancing SAM's capability for zero-shot segmentation. Comprehensive evaluations on seven public datasets spanning eight imaging modalities demonstrate that TV-SAM can effectively segment unseen targets across various modalities without additional training. TV-SAM significantly outperforms SAM AUTO ($p < 0.01$) and GSAM ($p < 0.05$), closely matches the performance of SAM BBOX with gold-standard bounding box prompts ($p = 0.07$), and surpasses state-of-the-art methods on specific datasets such as ISIC (0.853 versus 0.802) and WBC (0.968 versus 0.883). The study indicates that TV-SAM is an effective multimodal medical image zero-shot segmentation algorithm and highlights the significant contribution of GPT-4 to zero-shot segmentation. Integrating foundation models such as GPT-4, GLIP, and SAM can enhance the ability to address complex problems in specialized domains.
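The three-stage prompt pipeline the abstract describes (GPT-4 produces a text prompt, GLIP grounds it to a bounding box, SAM segments inside that box) can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: all three model wrappers (`gpt4_text_prompt`, `glip_bounding_box`, `sam_segment`) are hypothetical stand-ins, and the Dice coefficient shown is the usual overlap metric behind segmentation scores such as 0.853.

```python
def gpt4_text_prompt(modality):
    # Stand-in for GPT-4: generate a descriptive text prompt for the target.
    return f"the lesion region in a {modality} image"

def glip_bounding_box(width, height, text_prompt):
    # Stand-in for GLIP: ground the text prompt to a box (x0, y0, x1, y1).
    return (width // 4, height // 4, 3 * width // 4, 3 * height // 4)

def sam_segment(box):
    # Stand-in for SAM: return the predicted mask as a set of (x, y) pixels
    # inside the box prompt.
    x0, y0, x1, y1 = box
    return {(x, y) for x in range(x0, x1) for y in range(y0, y1)}

def dice(pred, gt):
    # Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|).
    if not pred and not gt:
        return 1.0
    return 2 * len(pred & gt) / (len(pred) + len(gt))

# Run the pipeline on a hypothetical 64x64 dermoscopy image.
prompt = gpt4_text_prompt("dermoscopy")
box = glip_bounding_box(64, 64, prompt)
pred = sam_segment(box)

# Score against a toy ground-truth mask.
gt = {(x, y) for x in range(12, 52) for y in range(12, 52)}
print(round(dice(pred, gt), 3))  # → 0.78
```

The design point carried by the sketch is that each stage supplies the prompt the next stage needs, so the whole chain runs with no manual annotation at any step.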