Computer science
Convolutional neural network
Benchmarking
Artificial intelligence
Transformer
Machine learning
Medical imaging
Discriminative model
Benchmark (surveying)
Boosting (machine learning)
Deep learning
Pattern recognition (psychology)
Physics
Geodesy
Marketing
Voltage
Quantum mechanics
Business
Geography
Authors
DongAo Ma, Mohammad Reza Hosseinzadeh Taher, Jiaxuan Pang, Nahid Ul Islam, Fatemeh Haghighi, Michael B. Gotway, Jianming Liang
Identifier
DOI: 10.1007/978-3-031-16852-9_2
Abstract
Visual transformers have recently gained popularity in the computer vision community as they began to outrank convolutional neural networks (CNNs) in one representative visual benchmark after another. However, the competition between visual transformers and CNNs in medical imaging is rarely studied, leaving many important questions unanswered. As the first step, we benchmark how well existing transformer variants that use various (supervised and self-supervised) pre-training methods perform against CNNs on a variety of medical classification tasks. Furthermore, given the data-hungry nature of transformers and the annotation-deficiency challenge of medical imaging, we present a practical approach for bridging the domain gap between photographic and medical images by utilizing unlabeled large-scale in-domain data. Our extensive empirical evaluations reveal the following insights in medical imaging: (1) good initialization is more crucial for transformer-based models than for CNNs, (2) self-supervised learning based on masked image modeling captures more generalizable representations than supervised models, and (3) assembling a larger-scale domain-specific dataset can better bridge the domain gap between photographic and medical images via self-supervised continuous pre-training. We hope this benchmark study can direct future research on applying transformers to medical imaging analysis. All codes and pre-trained models are available on our GitHub page https://github.com/JLiangLab/BenchmarkTransformers.
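The pipeline the abstract describes, initializing a vision transformer from (supervised or self-supervised) pre-trained weights and then fine-tuning it on a medical classification task, can be sketched as follows. This is a minimal hypothetical illustration, not the authors' released code (that is available on their GitHub page linked above); it assumes PyTorch and timm, and the model name, label count, and hyperparameters are assumptions chosen for the example.

# Hypothetical sketch (PyTorch + timm), not the authors' implementation:
# load a pre-trained vision transformer and fine-tune it on a
# multi-label medical image classification task.
import torch
import torch.nn as nn
import timm

# Initialize from pre-trained weights; the paper's insight (1) is that such
# initialization matters more for transformers than for CNNs. A
# self-supervised checkpoint (e.g., from masked image modeling) could be
# loaded in place of the supervised ImageNet weights used here.
model = timm.create_model("vit_base_patch16_224", pretrained=True,
                          num_classes=14)  # 14 labels: an assumed ChestX-ray14-style task

criterion = nn.BCEWithLogitsLoss()  # multi-label classification setup
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Stand-in batch; a real medical-image DataLoader would go here.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 14)).float()

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)  # one illustrative training step
loss.backward()
optimizer.step()

Per the abstract's insight (3), the paper's "boosting" recipe would additionally run self-supervised continuous pre-training on a large unlabeled in-domain medical dataset before this fine-tuning step, to bridge the gap between photographic and medical images.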