Keywords
Artificial intelligence, Computer science, Optical coherence tomography, Leverage (statistics), Feature (linguistics), Discriminative model, Deep learning, Glaucoma, Pattern, Pattern recognition (psychology), Medical imaging, Feature selection, Image registration, Image fusion, Computer vision, Machine learning, Image (mathematics), Medicine, Radiology, Ophthalmology, Social science, Linguistics, Philosophy, Sociology
Authors
Yan Wang,Liangli Zhen,Tien‐En Tan,Huazhu Fu,Yangqin Feng,Zizhou Wang,Xinxing Xu,Rick Siow Mong Goh,Yipin Ng,Claire T. Calhoun,Gavin Siew Wei Tan,Jennifer K. Sun,Yong Liu,Daniel Shu Wei Ting
Source
Journal: IEEE Transactions on Medical Imaging (Institute of Electrical and Electronics Engineers)
Date: 2024-01-11
Volume/Issue: 43 (5): 1945-1957
Citations: 3
Identifiers
DOI: 10.1109/tmi.2024.3352602
Abstract
Color fundus photography (CFP) and optical coherence tomography (OCT) are two of the most widely used imaging modalities in the clinical diagnosis and management of retinal diseases. Despite the widespread use of multimodal imaging in clinical practice, few methods for the automated diagnosis of eye diseases effectively exploit the correlated and complementary information from multiple modalities. This paper explores how to leverage information from CFP and OCT images to improve the automated diagnosis of retinal diseases. We propose a novel multimodal learning method, the geometric correspondence-based multimodal learning network (GeCoM-Net), to fuse CFP and OCT images. Specifically, inspired by clinical observations, we consider the geometric correspondence between an OCT slice and its region of the CFP image to learn correlated features of the two modalities for robust fusion. Furthermore, we design a new feature selection strategy that extracts discriminative OCT representations by automatically selecting the important feature maps from OCT slices. Unlike existing multimodal learning methods, GeCoM-Net is the first to explicitly formulate the geometric relationship between an OCT slice and the corresponding region of the CFP image for CFP-OCT fusion. Experiments were conducted on a large-scale private dataset and a publicly available dataset to evaluate the effectiveness of GeCoM-Net in diagnosing diabetic macular edema (DME), impaired visual acuity (VA), and glaucoma. The empirical results show that our method outperforms current state-of-the-art multimodal learning methods, improving the AUROC score by 0.4%, 1.9%, and 2.9% for DME, VA, and glaucoma detection, respectively.
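The abstract names two ideas: a geometric correspondence between each OCT B-scan and a region of the CFP image, and a feature selection step over OCT feature maps. The paper's actual GeCoM-Net architecture is not given here, so the sketch below is only one plausible reading of those two ideas. Everything in it is an assumption for illustration: the strip-based correspondence rule (B-scan s pooled against the s-th horizontal strip of the CFP feature map), the squeeze-and-excitation-style channel gate standing in for the feature selection strategy, the tiny encoders, and all names such as CorrespondenceFusionNet.

```python
# Hypothetical sketch of correspondence-aware CFP/OCT fusion.
# Shapes, module names, and the strip-based correspondence rule are
# assumptions for illustration; this is NOT the authors' GeCoM-Net.
import torch
import torch.nn as nn


def small_encoder(in_ch: int, out_ch: int) -> nn.Sequential:
    """Tiny stand-in CNN encoder (a real model would use a deeper backbone)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 3, stride=2, padding=1), nn.ReLU(),
    )


class CorrespondenceFusionNet(nn.Module):
    def __init__(self, feat_ch: int = 64, n_classes: int = 2):
        super().__init__()
        self.cfp_enc = small_encoder(3, feat_ch)   # color fundus photo
        self.oct_enc = small_encoder(1, feat_ch)   # one grayscale B-scan
        # Channel-importance gate for OCT feature-map selection (assumed
        # squeeze-and-excitation-style stand-in for the paper's strategy).
        self.gate = nn.Sequential(
            nn.Linear(feat_ch, feat_ch // 4), nn.ReLU(),
            nn.Linear(feat_ch // 4, feat_ch), nn.Sigmoid(),
        )
        self.head = nn.Linear(2 * feat_ch, n_classes)

    def forward(self, cfp: torch.Tensor, oct_slices: torch.Tensor) -> torch.Tensor:
        # cfp: (B, 3, H, W); oct_slices: (B, S, 1, h, w), S B-scans per eye.
        S = oct_slices.shape[1]
        cfp_feat = self.cfp_enc(cfp)                   # (B, C, H', W')
        H = cfp_feat.shape[2]
        fused = []
        for s in range(S):
            # Geometric correspondence (assumed): B-scan s maps to the
            # s-th horizontal strip of the en-face CFP feature map.
            lo, hi = s * H // S, (s + 1) * H // S
            strip = cfp_feat[:, :, lo:max(hi, lo + 1), :]
            strip_vec = strip.mean(dim=(2, 3))         # (B, C)

            oct_feat = self.oct_enc(oct_slices[:, s])  # (B, C, h', w')
            oct_vec = oct_feat.mean(dim=(2, 3))        # (B, C)
            # Feature selection: softly keep informative channels only.
            oct_vec = oct_vec * self.gate(oct_vec)

            fused.append(torch.cat([strip_vec, oct_vec], dim=1))
        # Aggregate slice-level fused features, then classify.
        return self.head(torch.stack(fused, dim=1).mean(dim=1))


if __name__ == "__main__":
    net = CorrespondenceFusionNet()
    logits = net(torch.randn(2, 3, 128, 128), torch.randn(2, 8, 1, 64, 64))
    print(logits.shape)  # torch.Size([2, 2])
```

The key design point the sketch tries to convey is that fusion happens per B-scan against its matched CFP region rather than between whole-image embeddings, which is what distinguishes a geometry-aware fusion from simple late concatenation of the two modalities.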