Artificial intelligence
Computer science
Pattern recognition (psychology)
Machine learning
Authors
Yutong Xie, Jianpeng Zhang, Yong Xia, Qi Wu
Identifier
DOI:10.1109/tpami.2024.3436105
Abstract
Self-supervised learning (SSL) opens up huge opportunities for medical image analysis, a field well known for its lack of annotations. However, aggregating massive (unlabeled) 3D medical images such as computerized tomography (CT) scans remains challenging due to high imaging costs and privacy restrictions. In our pilot study, we advocated bringing in a wealth of 2D images, such as chest X-rays, to compensate for the lack of 3D data, aiming to build a universal medical self-supervised representation learning framework called UniMiSS. In particular, we designed a pyramid U-like medical Transformer (MiT) as the backbone, enabling UniMiSS to perform SSL with both 2D and 3D images. Consequently, the predecessor UniMiSS has two obvious merits compared with current 3D-specific SSL: (1) more effective - it learns stronger representations, benefiting from more and more diverse data; and (2) more versatile - it suits various downstream tasks without being restricted by the dimensionality barrier. Unfortunately, UniMiSS did not dig deeply into the intrinsic anatomical correlation between 2D medical images and 3D volumes, due to the lack of paired multi-modal/multi-dimension patient data. In this extension paper, we propose UniMiSS+, in which we introduce digitally reconstructed radiograph (DRR) technology to simulate X-ray images from CT volumes, thereby obtaining paired CT and X-ray data. Benefiting from these pairs, we introduce an extra pair-wise constraint to boost cross-modality correlation learning, which can also serve as a cross-dimension regularization to further improve the representations. We conduct extensive experiments on multiple 3D/2D medical image analysis tasks, including segmentation and classification.
The results show that the proposed UniMiSS+ achieves promising performance on various downstream tasks, not only substantially outperforming ImageNet pre-training and other advanced SSL counterparts but also improving on the predecessor UniMiSS pre-training. Code is available at: https://github.com/YtongXie/UniMiSS-code.
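The abstract mentions using DRR technology to simulate X-ray images from a CT volume; the paper's actual DRR pipeline is not described here. As a rough illustration of the general idea only, the sketch below (function name and data are hypothetical) approximates a DRR with a simple parallel-beam projection: attenuation is integrated along one axis of the volume and normalized to an X-ray-like 2D image.

```python
import numpy as np

def simple_drr(ct_volume: np.ndarray, axis: int = 1) -> np.ndarray:
    """Parallel-beam DRR approximation: sum voxel intensities along one
    axis of a 3D CT volume, then min-max normalize to [0, 1].
    (Hypothetical helper; real DRR uses ray casting through a projective
    camera model, not a simple axis-aligned sum.)"""
    projection = ct_volume.sum(axis=axis).astype(np.float64)
    projection -= projection.min()
    rng = projection.max()
    return projection / rng if rng > 0 else projection

# Hypothetical CT volume of shape (depth, height, width) in HU-like units
ct = np.random.uniform(-1000.0, 1000.0, size=(64, 128, 128))
drr = simple_drr(ct)  # 2D synthetic "X-ray" of shape (64, 128)
```

Such simulated 2D projections are what make paired CT/X-ray data available for the pair-wise cross-modality constraint described above.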