Topics: Computer science, Artificial intelligence, Imaging phantom, Pattern recognition (psychology), View synthesis, Artifact (error), Computer vision, Nuclear medicine, Medicine, Rendering (computer graphics)
Authors
Yiwen Zhang, Chuanpu Li, Zhenhui Dai, Liming Zhong, Xuetao Wang, Wei Yang
Source
Journal: IEEE Transactions on Medical Imaging
[Institute of Electrical and Electronics Engineers]
Date: 2023-02-22
Volume/Issue: 42 (8): 2313-2324
Citations: 3
Identifier
DOI: 10.1109/tmi.2023.3247759
Abstract
Adaptive radiation therapy (ART) aims to deliver radiotherapy accurately and precisely in the presence of anatomical changes, in which the synthesis of computed tomography (CT) from cone-beam CT (CBCT) is an important step. However, because of serious motion artifacts, CBCT-to-CT synthesis remains a challenging task for breast-cancer ART. Existing synthesis methods usually ignore motion artifacts, thereby limiting their performance on chest CBCT images. In this paper, we decompose CBCT-to-CT synthesis into artifact reduction and intensity correction, and we introduce breath-hold CBCT images to guide them. To achieve superior synthesis performance, we propose a multimodal unsupervised representation disentanglement (MURD) learning framework that disentangles the content, style, and artifact representations from CBCT and CT images in the latent space. MURD can synthesize different forms of images by recombining the disentangled representations. We also propose a multipath consistency loss to improve structural consistency in synthesis and a multidomain generator to improve synthesis performance. Experiments on our breast-cancer dataset show that MURD achieves impressive performance, with a mean absolute error of 55.23±9.94 HU, a structural similarity index measure of 0.721±0.042, and a peak signal-to-noise ratio of 28.26±1.93 dB on synthetic CT. The results show that, compared to state-of-the-art unsupervised synthesis methods, our method produces better synthetic CT images in terms of both accuracy and visual quality.
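To make the disentangle-and-recombine idea concrete, the following is a minimal toy sketch in numpy, not the authors' MURD implementation: the encoders, generator, code sizes, and the choice of linear projections are all illustrative assumptions. It shows how an image can be split into content, style, and artifact codes, and how a synthetic CT can be formed by pairing the CBCT's content code with the CT domain's style code while discarding the artifact code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "images" standing in for a CBCT slice and a CT slice.
cbct = rng.normal(size=16)
ct = rng.normal(size=16)

# Hypothetical linear "encoders": fixed random projections that split an
# image into content, style, and artifact codes (MURD itself learns
# nonlinear encoders; these matrices are placeholders for illustration).
W_content = rng.normal(size=(8, 16))
W_style = rng.normal(size=(4, 16))
W_artifact = rng.normal(size=(4, 16))

def encode(x):
    """Return (content, style, artifact) codes for an image vector."""
    return W_content @ x, W_style @ x, W_artifact @ x

# Hypothetical linear "generator": recombines codes into an image vector.
W_dec = rng.normal(size=(16, 16))

def generate(content, style, artifact):
    return W_dec @ np.concatenate([content, style, artifact])

# CBCT-to-CT synthesis: keep the CBCT's content, borrow the CT domain's
# style, and zero out the artifact code (artifact reduction).
c_cbct, _, _ = encode(cbct)
_, s_ct, _ = encode(ct)
synthetic_ct = generate(c_cbct, s_ct, np.zeros(4))
print(synthetic_ct.shape)  # (16,)
```

Recombining the same codes differently (e.g. CBCT content with the CBCT's own style and artifact codes) would reconstruct the input, which is the kind of consistency constraint the multipath consistency loss enforces during training.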