Computer science
Feature (linguistics)
Artificial intelligence
Nodule (geology)
Texture synthesis
Image (mathematics)
Pattern recognition (psychology)
Computer vision
Image segmentation
Image texture
Linguistics
Biology
Philosophy
Paleontology
Authors
Qiuli Wang, Xiaohong Zhang, Wei Zhang, Mingchen Gao, Sheng Huang, Li Wang, Jiuquan Zhang, Dan Yang, Chen Liu
Source
Journal: IEEE Transactions on Medical Imaging
[Institute of Electrical and Electronics Engineers]
Date: 2021-05-03
Volume/Issue: 40 (9): 2343-2353
Cited by: 19
Identifier
DOI: 10.1109/TMI.2021.3077089
Abstract
Realistic lung nodule synthesis requires diversity in nodule shape and background, controllability over semantic feature levels, and high overall CT image quality. To incorporate these cues as multiple learning targets, we introduce the Multi-Target Co-Guided Adversarial Mechanism, which uses foreground and background masks to guide the nodule shape and surrounding lung tissues, and takes the CT lung and mediastinal windows as guidance for spiculation and texture control, respectively. We further propose a Multi-Target Co-Guided Synthesizing Network with a joint loss function that realizes the co-guidance of image generation and semantic feature learning. The proposed network contains a Mask-Guided Generative Adversarial Sub-Network (MGGAN) and a Window-Guided Semantic Learning Sub-Network (WGSLN). The MGGAN generates the initial synthesis from the combined foreground and background masks, which guide the generation of the nodule shape and background tissues. Meanwhile, the WGSLN controls the semantic features and refines the synthesis quality by transforming the initial synthesis into the CT lung and mediastinal windows and learning spiculation and texture simultaneously. We validated our method with a quantitative analysis of authenticity under the Fréchet Inception Distance (FID), and the results show state-of-the-art performance. We also evaluated our method as a data augmentation technique for predicting malignancy level on the LIDC-IDRI database; the accuracy of VGG-16 improved by 5.6%. The experimental results confirm the effectiveness of the proposed method.
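The window-guided step above hinges on the standard CT windowing transform, which clips Hounsfield-unit (HU) values to a level/width range and rescales them. Below is a minimal Python sketch of how the two window-transformed views used to guide spiculation and texture learning could be derived; the window settings (lung: level -600 HU, width 1500 HU; mediastinal: level 40 HU, width 400 HU) are conventional radiology values and the function names are illustrative assumptions, since the abstract does not specify the paper's exact parameters.

import numpy as np

def apply_ct_window(hu, level, width):
    # Clip a CT patch (in Hounsfield units) to the display window
    # [level - width/2, level + width/2] and rescale to [0, 1].
    lo, hi = level - width / 2.0, level + width / 2.0
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

# Conventional radiology settings; assumed, not taken from the paper.
LUNG_LEVEL, LUNG_WIDTH = -600.0, 1500.0    # lung window: emphasizes nodule margins/spiculation
MEDIA_LEVEL, MEDIA_WIDTH = 40.0, 400.0     # mediastinal window: emphasizes soft-tissue texture

def window_guided_views(hu_patch):
    # Produce the two views a network like the WGSLN could be trained
    # against: one emphasizing spiculation, one soft-tissue texture.
    lung = apply_ct_window(hu_patch, LUNG_LEVEL, LUNG_WIDTH)
    mediastinal = apply_ct_window(hu_patch, MEDIA_LEVEL, MEDIA_WIDTH)
    return lung, mediastinal

if __name__ == "__main__":
    patch = np.random.uniform(-1000.0, 400.0, size=(64, 64))  # synthetic HU patch
    lung, med = window_guided_views(patch)
    print(lung.min(), lung.max(), med.min(), med.max())       # all values in [0, 1]

Because the transform is just a clip followed by an affine rescale, it is differentiable almost everywhere and can sit inside a synthesis pipeline, so losses computed on the two windowed views can back-propagate to the generator.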