Keywords
Optical coherence tomography
Computer science
Artificial intelligence
Deep learning
Fundus (eye)
Diabetic retinopathy
Pattern recognition
Medical imaging
Modality (medical imaging)
Machine learning
Supervised learning
Retina
Medicine
Artificial neural network
Ophthalmology
Endocrinology
Chemistry
Polymer chemistry
Diabetes mellitus
Authors
Olle Holmberg,Niklas Köhler,Thiago Gonçalves dos Santos Martins,Jakob Siedlecki,Tina Herold,Leonie Keidel,Ben Asani,Johannes Schiefelbein,Siegfried Priglinger,Karsten Kortuem,Fabian J. Theis
Identifier
DOI:10.1038/s42256-020-00247-1
Abstract
Access to large, annotated samples represents a considerable challenge for training accurate deep-learning models in medical imaging. Although at present transfer learning from pre-trained models can help with cases lacking data, this limits design choices and generally results in the use of unnecessarily large models. Here we propose a self-supervised training scheme for obtaining high-quality, pre-trained networks from unlabelled, cross-modal medical imaging data, which will allow the creation of accurate and efficient models. We demonstrate the utility of the scheme by accurately predicting retinal thickness measurements based on optical coherence tomography from simple infrared fundus images. Subsequently, learned representations outperformed advanced classifiers on a separate diabetic retinopathy classification task in a scenario of scarce training data. Our cross-modal, three-stage scheme effectively replaced 26,343 diabetic retinopathy annotations with 1,009 semantic segmentations on optical coherence tomography and reached the same classification accuracy using only 25% of fundus images, without any drawbacks, since optical coherence tomography is not required for predictions. We expect this concept to apply to other multimodal clinical imaging, health records and genomics data, and to corresponding sample-starved learning problems.

The thickness of the retina is an important medical indicator for diabetic retinopathy. Holmberg and colleagues present a self-supervised deep-learning method that uses cross-modal data to predict retinal thickness maps from easily obtainable fundus images.
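The abstract outlines a cross-modal, self-supervised scheme: a network is first pretrained to regress OCT-derived retinal thickness maps from infrared fundus images, and the learned encoder is then fine-tuned for diabetic retinopathy classification from fundus images alone. The sketch below illustrates that idea in PyTorch; the `Encoder`, `ThicknessDecoder`, classification head, tensor shapes and hyperparameters are all illustrative assumptions, not the authors' actual architecture or training code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Small convolutional encoder shared by both training stages."""
    def __init__(self, in_ch: int = 1, feat: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class ThicknessDecoder(nn.Module):
    """Upsamples encoder features back to a per-pixel thickness map."""
    def __init__(self, feat: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, z):
        return self.net(z)

# Stage A (self-supervised pretraining): regress OCT-derived retinal
# thickness maps from infrared fundus images; the OCT pipeline supplies
# the targets, so no manual image-level labels are needed here.
encoder, decoder = Encoder(), ThicknessDecoder()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)
fundus = torch.randn(8, 1, 64, 64)      # stand-in batch of fundus images
thickness = torch.randn(8, 1, 64, 64)   # stand-in OCT thickness targets
opt.zero_grad()
loss = F.mse_loss(decoder(encoder(fundus)), thickness)
loss.backward()
opt.step()

# Stage B (transfer): keep the pretrained encoder, attach a small
# classification head, and fine-tune on scarce diabetic retinopathy labels.
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 2))
ft_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-5)
dr_labels = torch.randint(0, 2, (8,))   # stand-in binary DR labels
ft_opt.zero_grad()
cls_loss = F.cross_entropy(head(encoder(fundus)), dr_labels)
cls_loss.backward()
ft_opt.step()
```

The design point mirrors the abstract's claim: the expensive supervision (OCT-derived thickness maps) is consumed only during pretraining, so at inference time the classifier needs nothing but a fundus image.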