Related concepts
Inference
Segmentation
Computer science
Artificial intelligence
Sliding window protocol
Test apparatus
Resource (disambiguation)
Net (polyhedron)
Machine learning
Window (computing)
Data mining
Mathematics
World Wide Web
Geometry
Computer network
Authors
Ziyan Huang, Haoyu Wang, Jin Ye, Jingqi Niu, Can Tu, Yuncheng Yang, Shiyi Du, Zhongying Deng, Lixu Gu, Junjun He
Identifier
DOI: 10.1007/978-3-031-23911-3_16
Abstract
nnU-Net has served as a strong baseline for many medical image segmentation challenges in recent years. It works well for fully supervised segmentation tasks. However, it is less efficient at inference and cannot effectively make full use of unlabeled data, both of which are vital in real clinical scenarios. To this end, we revisit nnU-Net and identify the trade-off between efficiency and accuracy in this framework. Based on the default nnU-Net settings, we design a co-training framework consisting of two strategies: one to generate high-quality pseudo labels and one to make inference efficient. Specifically, we first design a resource-intensive nnU-Net to iteratively generate high-quality pseudo labels for unlabeled data. We then train another lightweight 3D nnU-Net on the labeled data and selected unlabeled data, using the high-quality pseudo labels for the latter, to achieve efficient segmentation. We conduct experiments on the FLARE22 challenge. Our resource-intensive nnU-Net achieves a mean DSC of 0.9064 on 13 abdominal organ segmentation tasks and ranks first on the validation leaderboard. Our lightweight nnU-Net achieves a mean DSC of 0.8773 on the validation leaderboard but makes a better trade-off between accuracy and efficiency. On the test set, it achieves a mean DSC of 0.8864, a mean NSD of 0.9465, and an average inference time of 14.59 s, winning the championship of the FLARE22 challenge. Our code is publicly available at https://github.com/Ziyan-Huang/FLARE22 .
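At a high level, the two-stage co-training idea described in the abstract can be sketched as follows. This is a minimal, framework-agnostic illustration: the helper names (train_model, predict_with_probs, select_reliable), the dummy data, and the foreground-confidence selection rule are all assumptions made for this sketch, not the authors' method or API; the actual implementation is in the GitHub repository linked above.

```python
import numpy as np

def train_model(images, labels, capacity="large"):
    # Placeholder for training an nnU-Net-style segmentation network.
    # Here it only records what it was trained on (hypothetical stub).
    return {"capacity": capacity, "n_train": len(images)}

def predict_with_probs(model, image):
    # Placeholder inference: returns random per-voxel class probabilities
    # shaped (C, D, H, W). A real model would return softmax outputs.
    n_classes = 14  # 13 abdominal organs + background, per the abstract
    probs = np.random.dirichlet(np.full(n_classes, 0.1), size=image.shape)
    return np.moveaxis(probs, -1, 0).astype(np.float32)

def select_reliable(probs, fg_conf_thresh=0.7):
    # One simple selection heuristic (assumed, not the paper's criterion):
    # keep the pseudo label only if mean foreground confidence is high.
    pred = probs.argmax(0)
    fg = pred > 0
    mean_conf = float(probs.max(0)[fg].mean()) if fg.any() else 0.0
    return pred, mean_conf >= fg_conf_thresh

# Toy data standing in for 3D CT volumes and their ground-truth masks.
labeled_imgs   = [np.random.rand(32, 32, 32) for _ in range(4)]
labeled_gts    = [np.random.randint(0, 14, (32, 32, 32)) for _ in range(4)]
unlabeled_imgs = [np.random.rand(32, 32, 32) for _ in range(8)]

# Stage 1: a resource-intensive model iteratively refines pseudo labels.
big_model = train_model(labeled_imgs, labeled_gts, capacity="large")
pseudo_imgs, pseudo_lbls = [], []
for _ in range(2):  # a couple of self-training rounds
    pseudo_imgs, pseudo_lbls = [], []
    for img in unlabeled_imgs:
        probs = predict_with_probs(big_model, img)
        pred, ok = select_reliable(probs)
        if ok:
            pseudo_imgs.append(img)
            pseudo_lbls.append(pred)
    big_model = train_model(labeled_imgs + pseudo_imgs,
                            labeled_gts + pseudo_lbls, capacity="large")

# Stage 2: a lightweight model is trained on labeled data plus the selected
# pseudo-labeled data, and is the one deployed for fast inference.
small_model = train_model(labeled_imgs + pseudo_imgs,
                          labeled_gts + pseudo_lbls, capacity="small")
```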