Artificial intelligence
Computer science
Affine transformation
Segmentation
Convolutional neural network
Image registration
Voxel
Rigid transformation
Thin plate spline
Landmark
Pattern recognition (psychology)
Computer vision
Sørensen–Dice coefficient
Robustness (evolution)
Image segmentation
Image (mathematics)
Spline interpolation
Mathematics
Gene
Biochemistry
Chemistry
Pure mathematics
Bilinear interpolation
Authors
Xiaokun Liang, Na Li, Zhicheng Zhang, Jing Xiong, S. Kevin Zhou, Yaoqin Xie
Identifier
DOI: 10.1016/j.media.2021.102156
Abstract
Automated multi-organ abdominal Computed Tomography (CT) image segmentation can assist treatment planning and diagnosis, and improve the efficiency of many clinical workflows. The 3-D Convolutional Neural Network (CNN) recently attained state-of-the-art accuracy, but it typically relies on supervised training with a large amount of manually annotated data. Many methods use a data augmentation strategy based on rigid or affine spatial transformations to alleviate over-fitting and improve the network's robustness. However, rigid or affine spatial transformations fail to capture the complex voxel-level deformations of the abdomen, which is filled with many soft organs. To tackle this issue, we developed a novel Hybrid Deformable Model (HDM), which combines inter- and intra-patient deformations for more effective data augmentation. The inter-patient deformations were extracted from learning-based deformable registration between different patients, while the intra-patient deformations were generated with random 3-D Thin-Plate-Spline (TPS) transformations. Incorporating the HDM enabled the network to capture many of the subtle deformations of abdominal organs. To find a better solution and achieve faster convergence during network training, we fused pre-trained multi-scale features into a 3-D attention U-Net. We directly compared the segmentation accuracy of the proposed method with previous techniques on datasets from several centers via cross-validation. The proposed method achieves an average Dice Similarity Coefficient (DSC) of 0.852, outperforming other state-of-the-art methods on multi-organ abdominal CT segmentation.
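Two quantitative ingredients named in the abstract, the random 3-D Thin-Plate-Spline deformation used for intra-patient augmentation and the Dice Similarity Coefficient (DSC = 2|A∩B| / (|A| + |B|)) used for evaluation, can be sketched in a few lines. The Python snippet below is a minimal illustration assuming NumPy/SciPy, not the authors' implementation: the function names random_tps_augment and dice_coefficient, the parameters n_ctrl and max_disp, and the use of SciPy's RBFInterpolator with its thin_plate_spline kernel as a stand-in for a true 3-D TPS are all assumptions made for this sketch.

```python
# Minimal sketch (assumed NumPy/SciPy): random TPS-style deformation for
# 3-D data augmentation, plus the Dice Similarity Coefficient. Illustrative
# only; this is not the paper's released code.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates


def random_tps_augment(volume, n_ctrl=4, max_disp=3.0, seed=None):
    """Warp a 3-D volume with a random thin-plate-spline displacement field.

    A coarse grid of n_ctrl**3 control points is displaced by random offsets
    of up to max_disp voxels; the dense displacement field is obtained by
    TPS interpolation. Note: evaluating the spline at every voxel of a
    full-resolution CT is costly; in practice one would evaluate it on a
    coarse grid and upsample the displacement field.
    """
    rng = np.random.default_rng(seed)

    # Control points on a regular grid spanning the volume.
    axes = [np.linspace(0, s - 1, n_ctrl) for s in volume.shape]
    ctrl = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)

    # Random displacements at the control points define the deformation.
    disp = rng.uniform(-max_disp, max_disp, size=ctrl.shape)
    tps = RBFInterpolator(ctrl, disp, kernel="thin_plate_spline")

    # Dense voxel grid, shifted by the interpolated displacement field.
    grid = np.stack(
        np.meshgrid(*[np.arange(s) for s in volume.shape], indexing="ij"),
        axis=-1,
    ).reshape(-1, 3).astype(float)
    warped = (grid + tps(grid)).T.reshape(3, *volume.shape)

    # Trilinear resampling of the input at the warped coordinates.
    return map_coordinates(volume, warped, order=1, mode="nearest")


def dice_coefficient(pred, target, label):
    """Sørensen–Dice coefficient, 2|A∩B| / (|A| + |B|), for one organ label."""
    a, b = (pred == label), (target == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else 1.0


if __name__ == "__main__":
    ct = np.random.rand(32, 32, 32)       # stand-in for a small CT sub-volume
    augmented = random_tps_augment(ct, seed=0)
    print(augmented.shape)                # (32, 32, 32)
```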