Computer science
Segmentation
Artificial intelligence
Coronal plane
Computer vision
Partial volume
Feature (linguistics)
Spatial analysis
Pattern recognition (psychology)
Radiology
Medicine
Mathematics
Linguistics
Statistics
Philosophy
Authors
Zhaoshuo Diao, Huiyan Jiang, Tianyu Shi
Identifier
DOI: 10.1016/j.engappai.2023.105955
Abstract
Tumor segmentation is a key step in computer-aided diagnosis. PET–CT co-segmentation combines the high sensitivity of PET images with the anatomical information of CT images. For whole-body multiple tumors, such as soft tissue sarcoma and lymphoma, lesion locations and sizes vary widely, so tumor regions must be segmented with reference to whole-body anatomical information. Effectively leveraging whole-body contextual information and fusing multimodal information are the keys to this problem. To address it, we propose a spatial squeeze and multimodal feature fusion attention network for whole-body multiple tumor segmentation based on PET–CT volumes. Our method consists of two parts: a Coronal-Spatial Squeeze Attention Extraction Network (CSAE-Net) and a Precise PET–CT Fusion Attention Segmentation Network (PFAS-Net). In CSAE-Net, we squeeze a 3D PET–CT volume along the coronal plane into m 2D images and obtain a 3D Coronal Spatial Squeeze Attention Volume from these 2D images. In PFAS-Net, the input is a 2D axial PET–CT slice, and the previously obtained coronal spatial squeeze attention map guides the segmentation. Moreover, a Multimodal Fusion Attention (MFA) module is proposed to fuse the metabolic information of PET and the anatomical information of CT. We perform experiments on PET–CT datasets of two whole-body multiple tumors, Soft Tissue Sarcoma (STS) and Lymphoma. The results show that our method improves Dice values by 8.03% on STS and 1.74% on Lymphoma. The visualization results also show that our method is able to suppress high-uptake regions of normal tissues.
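The abstract describes two components: a coronal spatial squeeze attention stage and a PET–CT multimodal fusion attention module. The PyTorch sketch below illustrates one plausible reading of these two ideas. The module names, layer sizes, the sigmoid gating, and the treatment of coronal positions as independent 2D images are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming: (1) the coronal squeeze processes each coronal
# position of a 2-channel PET-CT volume as a 2D image and stacks the resulting
# attention maps back into a 3D attention volume, and (2) the fusion block lets
# PET-derived weights gate CT features before merging. All names are hypothetical.
import torch
import torch.nn as nn

class CoronalSqueezeAttention(nn.Module):
    """Predict a 2D attention map per coronal position, then rebuild a 3D volume."""
    def __init__(self, in_ch=2):
        super().__init__()
        self.att = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),
        )

    def forward(self, vol):                       # vol: (B, 2, D, H, W); H = coronal axis (assumption)
        b, c, d, h, w = vol.shape
        planes = vol.permute(0, 3, 1, 2, 4).reshape(b * h, c, d, w)  # h coronal 2D images
        att = self.att(planes)                    # (B*h, 1, D, W) attention per coronal image
        return att.reshape(b, h, 1, d, w).permute(0, 2, 3, 1, 4)     # (B, 1, D, H, W)

class MultimodalFusionAttention(nn.Module):
    """Fuse PET and CT feature maps: PET (metabolic) features gate CT (anatomical) features."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.merge = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, f_pet, f_ct):               # both: (B, ch, H, W)
        gated_ct = f_ct * self.gate(f_pet)        # metabolic information reweights anatomy
        return self.merge(torch.cat([f_pet, gated_ct], dim=1))

if __name__ == "__main__":
    volume = torch.randn(1, 2, 64, 96, 64)            # toy PET+CT volume
    att_vol = CoronalSqueezeAttention()(volume)       # (1, 1, 64, 96, 64) attention volume
    f_pet, f_ct = torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64)
    fused = MultimodalFusionAttention(32)(f_pet, f_ct)
    print(att_vol.shape, fused.shape)
```

In this reading, the 3D attention volume from the coronal stage would be sliced axially and multiplied into the 2D segmentation branch, which is one way the coronal context could "guide" per-slice segmentation; the paper's actual guidance mechanism may differ.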