Segmentation
Artificial intelligence
Feature (linguistics)
Modal verb
Mutual information
Computer science
Pattern recognition (psychology)
Software portability
Process (computing)
Optical coherence tomography
Computer vision
Scalability
Machine learning
Ophthalmology
Medicine
Operating system
Philosophy
Database
Chemistry
Polymer chemistry
Programming language
Linguistics
Authors
Xin Zhao,Jing Zhang,Qiaozhe Li,Tengfei Zhao,Yi Li,Zifeng Wu
Identifier
DOI:10.1016/j.patcog.2024.110376
Abstract
Research on optical coherence tomography angiography (OCTA) images has received extensive attention in recent years, since OCTA provides detailed information about retinal structures. Automatic segmentation of retinal vessels (RV) has become a key step in the quantification of retinal indicators. To this end, various methods with cutting-edge designs and techniques have been proposed in the literature. However, most of them learn features from single-modal data only, despite the potential relations between data from different modalities. Clinically, 2D projection maps are more convenient for doctors to inspect, whereas 3D volumes preserve the intrinsic retinal structure. We therefore propose a novel multi-modal feature mutual learning framework comprising local mutual learning and global mutual learning, which captures both 2D and 3D information. Within the framework, the 3D model and the 2D model learn collaboratively and teach each other throughout training. Experimental results show that our method outperforms previous deep-learning methods in RV segmentation, and generalization experiments on the ROSE dataset demonstrate the portability and scalability of the proposed framework.
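The abstract's core idea of the 2D and 3D models "teaching each other" can be sketched with a symmetric KL-divergence term between their per-pixel vessel-probability maps, as in deep mutual learning. This is an illustrative assumption, not the authors' exact loss: the function names (`mutual_loss`, `kl_div`) and the specific formulation are hypothetical.

```python
import numpy as np

# Hypothetical sketch of the mutual-learning idea from the abstract:
# a 2D model (projection maps) and a 3D model (volumes) supervise each
# other by matching their per-pixel class distributions. The symmetric
# KL formulation below is an assumption, not the paper's exact loss.

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl_div(p, q, eps=1e-8):
    # Mean KL(p || q) over all pixels; eps guards against log(0).
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

def mutual_loss(logits_2d, logits_3d):
    # Symmetric KL term: each model is pulled toward the other's prediction.
    p2 = softmax(logits_2d)
    p3 = softmax(logits_3d)
    return 0.5 * (kl_div(p2, p3) + kl_div(p3, p2))

# Toy example: 2-class (background/vessel) logits on a 4x4 map.
rng = np.random.default_rng(0)
l2d = rng.normal(size=(4, 4, 2))
l3d = rng.normal(size=(4, 4, 2))
print(mutual_loss(l2d, l3d) >= 0.0)          # KL is non-negative -> True
print(np.isclose(mutual_loss(l2d, l2d), 0))  # identical predictions -> True
```

In training, this term would be added to each model's own supervised segmentation loss (e.g. cross-entropy or Dice against the ground truth), so that the two branches exchange complementary 2D and 3D cues while still fitting the labels.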