Optical coherence tomography
Medicine
Workflow
Fundus photography
Artificial intelligence
Interface
Modality
Optometry
Computer science
Data science
Radiology
Ophthalmology
Fluorescein angiography
Social science
Vision
Database
Sociology
Computer hardware
Authors
Gilbert Lim, Kabilan Elangovan, Liyuan Jin
Identifier
DOI: 10.1097/icu.0000000000001089
Abstract
Purpose of review: Vision Language Models are an emerging paradigm in artificial intelligence that offers the potential to natively analyze both image and textual data simultaneously within a single model. The fusion of these two modalities is of particular relevance to ophthalmology, which has historically relied on specialized imaging techniques such as angiography, optical coherence tomography, and fundus photography, while also interfacing with electronic health records that include free-text descriptions. This review surveys the fast-evolving field of Vision Language Models as they apply to current ophthalmologic research and practice.

Recent findings: Although models incorporating both image and text data have a long provenance in ophthalmology, effective multimodal Vision Language Models are a recent development that exploits advances in technologies such as transformer and autoencoder models.

Summary: Vision Language Models offer the potential to assist and streamline the existing clinical workflow in ophthalmology, whether before, during, or after the patient visit. There remain, however, important challenges to overcome, particularly regarding patient privacy and the explainability of model recommendations.
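To make the image-text fusion described above concrete, the sketch below shows zero-shot matching of an ophthalmic image against free-text descriptions using a general-purpose contrastive Vision Language Model (CLIP) via the Hugging Face transformers library. This is only an illustrative assumption, not the method of the reviewed paper: the checkpoint name, the diagnostic prompts, and the file fundus_example.png are placeholders, and a clinically useful system would rely on a domain-adapted model and validation.

```python
# Minimal sketch: zero-shot image-text matching with a generic CLIP model.
# Assumptions (not from the source article): the checkpoint, the prompts,
# and the fundus image path are illustrative placeholders only.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("fundus_example.png")  # hypothetical fundus photograph
prompts = [
    "a fundus photograph showing diabetic retinopathy",
    "a fundus photograph of a healthy retina",
]

# Encode both modalities and score image-text similarity in one forward pass.
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Softmax over image-text similarity logits gives a zero-shot probability
# distribution across the candidate text descriptions.
probs = outputs.logits_per_image.softmax(dim=-1)
for prompt, p in zip(prompts, probs[0].tolist()):
    print(f"{p:.3f}  {prompt}")
```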