Prostate
Parametric statistics
Prostate cancer
Magnetic resonance imaging
Dosimetry
Cancer detection
Medical imaging
Radiology
Medical physics
Nuclear medicine
Medicine
Mathematics
Statistics
Cancer
Internal medicine
Authors
Yuheng Li, Jacob Wynne, Jing Wang, Richard L. J. Qiu, Justin Roper, Shaoyan Pan, Ashesh B. Jani, Tian Liu, Pretesh R. Patel, Hui Mao, Xiaofeng Yang
Abstract
Bi-parametric magnetic resonance imaging (bpMRI) has demonstrated promising results in prostate cancer (PCa) detection. Vision transformers have achieved performance competitive with convolutional neural networks (CNNs) in deep learning, but they require abundant annotated data for training. Self-supervised learning can effectively leverage unlabeled data to extract useful semantic representations without annotation and its associated costs. This study proposes a novel self-supervised learning framework and a transformer model to enhance PCa detection using prostate bpMRI. We introduce a novel end-to-end Cross-Shaped windows (CSwin) transformer UNet model, CSwin UNet, to detect clinically significant prostate cancer (csPCa) in prostate bpMRI. We also propose a multitask self-supervised learning framework to leverage unlabeled data and improve network generalizability. Using a large prostate bpMRI dataset (PI-CAI) with 1476 patients, we first pretrain the CSwin transformer with multitask self-supervised learning to improve data efficiency and network generalizability. We then finetune using lesion annotations to perform csPCa detection. We also test the network's generalization using a separate bpMRI dataset with 158 patients (Prostate158). Five-fold cross validation shows that self-supervised CSwin UNet achieves 0.888 ± 0.010 area under the receiver operating characteristic curve (AUC) and 0.545 ± 0.060 Average Precision (AP) on the PI-CAI dataset, significantly outperforming five comparable models (nnFormer, Swin UNETR, DynUNet, Attention UNet, UNet). On model generalizability, self-supervised CSwin UNet achieves 0.79 AUC and 0.45 AP, still outperforming all other comparable methods and demonstrating good generalization to external data. This study proposes CSwin UNet, a new transformer-based model for end-to-end detection of csPCa, enhanced by self-supervised pretraining to improve network generalizability.
We employ an automatic weighted loss (AWL) to unify pretext tasks, improving representation learning. Evaluated on two multi-institutional public datasets, our method surpasses existing methods in detection metrics and demonstrates good generalization to external data.
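The abstract does not spell out the automatic weighted loss (AWL) formulation, but AWL commonly refers to uncertainty-based multitask weighting, where each pretext task's loss is scaled by a learnable log-variance term so that no single task dominates pretraining. A minimal sketch under that assumption, with plain floats standing in for learnable parameters:

```python
import math

def automatic_weighted_loss(task_losses, log_vars):
    """Combine per-task losses using homoscedastic-uncertainty weighting.

    Each task i contributes exp(-s_i) * L_i + s_i, where s_i = log(sigma_i^2)
    would normally be a learnable parameter updated by the optimizer. Here
    s_i values are plain floats purely for illustration.
    """
    assert len(task_losses) == len(log_vars)
    total = 0.0
    for loss, s in zip(task_losses, log_vars):
        # exp(-s) down-weights noisy (high-uncertainty) tasks;
        # the +s term regularizes against driving all weights to zero.
        total += math.exp(-s) * loss + s
    return total

# Two pretext-task losses with equal initial uncertainty (s = 0),
# so each loss enters with weight exp(0) = 1:
combined = automatic_weighted_loss([0.8, 1.2], [0.0, 0.0])
print(combined)  # → 2.0
```

In a training loop the `log_vars` would be registered as trainable parameters alongside the network weights, letting the optimizer balance the pretext tasks automatically rather than via hand-tuned coefficients.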