关键词 (Keywords)
Fundus (eye)
Benchmark (computing)
Computer science
Artificial intelligence
Retina
Pattern recognition
Medical imaging
Optical coherence tomography
Computer vision
Ophthalmology
Medicine
作者 (Authors)
Lehan Wang, Chongchong Qi, Chubin Ou, Lin An, Mei Jin, Xiangbin Kong, Xiaomeng Li
标识 (Identifier)
DOI:10.1109/tmi.2024.3518067
摘要 (Abstract)
Existing multi-modal learning methods on fundus and OCT images mostly require both modalities to be available and strictly paired for training and testing, which is less practical in clinical scenarios. To expand the scope of clinical applications, we formulate a novel setting, "OCT-enhanced disease recognition from fundus images", which allows the use of unpaired multi-modal data during the training phase and relies on widely available fundus photographs for testing. To benchmark this setting, we present the first large multi-modal, multi-class dataset for eye disease diagnosis, MultiEYE, and propose an OCT-assisted Conceptual Distillation Approach (OCT-CoDA), which employs semantically rich concepts to extract disease-related knowledge from OCT images and injects it into the fundus model. Specifically, we treat the image-concept relation as a link for distilling useful knowledge from the OCT teacher model to the fundus student model, which considerably improves diagnostic performance on fundus images and renders the cross-modal knowledge transfer an explainable process. Through extensive experiments on the multi-disease classification task, our proposed OCT-CoDA demonstrates remarkable results and interpretability, showing great potential for clinical application. Our dataset and code are available at https://github.com/xmed-lab/MultiEYE.
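The abstract describes distilling knowledge through image-concept relations: the OCT teacher and the fundus student each score an image against a set of disease concepts, and the student is trained to match the teacher's concept distribution. The sketch below is a minimal, hypothetical illustration of such a concept-level distillation loss (a temperature-scaled KL divergence over image-concept similarity scores, here with NumPy); the function names and toy similarity values are assumptions for illustration, not the actual OCT-CoDA implementation.

```python
import numpy as np

def softmax(scores, temperature=1.0):
    """Numerically stable softmax over a 1-D array of similarity scores."""
    z = np.asarray(scores, dtype=float) / temperature
    z = z - z.max()  # shift for stability; softmax is shift-invariant
    e = np.exp(z)
    return e / e.sum()

def concept_distillation_loss(teacher_sim, student_sim, temperature=2.0):
    """KL(teacher || student) between image-concept similarity distributions.

    teacher_sim / student_sim: raw similarity scores of one image to each
    disease concept, produced by the OCT teacher and fundus student models.
    """
    p = softmax(teacher_sim, temperature)
    q = softmax(student_sim, temperature)
    eps = 1e-12  # guard against log(0)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# Toy example: similarities of one image to four disease concepts.
teacher = [2.0, 0.5, -1.0, 0.1]   # hypothetical OCT teacher scores
student = [1.0, 0.8, -0.5, 0.2]   # hypothetical fundus student scores
loss = concept_distillation_loss(teacher, student)
```

Minimizing this loss pushes the fundus student's concept distribution toward the OCT teacher's; because the supervision is expressed over named disease concepts rather than opaque features, the transfer remains inspectable per concept.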