Keywords: Discriminator, Artificial intelligence, Computer science, Convolutional neural network, Deep learning, Pattern recognition, Generator, Preprocessor, Computer vision, Quantum mechanics, Telecommunications, Detector, Physics, Power (physics)
Authors
Nan Chen, Zhe Zhang, Jinfeng Pan, Xiaona Li, Weiyi Chen, Guanghua Zhang, Weihua Yang
Identifier
DOI:10.1016/j.medntd.2023.100267
Abstract
This work presents a new multimodal-fusion generative adversarial network (GAN) model, Multiple Conditions Transform W-net (MCSTransWnet), which uses femtosecond laser arcuate keratotomy surgical parameters and the preoperative corneal topography to predict the postoperative corneal topography in astigmatism-corrected patients. MCSTransWnet comprises a generator and a discriminator, and the generator is composed of two sub-generators. The first sub-generator extracts features using a U-net model, a vision transformer (ViT), and a multi-parameter conditional module branch. The second sub-generator uses a U-net network for further image denoising. The discriminator is the pixel discriminator from Pix2Pix. Most current GAN models are convolutional neural networks; because their feature extraction is local, it is difficult for them to capture relationships among global features. We therefore added a vision transformer branch to extract global features. Transformers are normally difficult to train, and image noise and loss of geometric information are likely; hence, we fused the standard U-net scheme with the transformer network in the generator, so that global features, local features, and rich image detail are obtained simultaneously. Our experimental results demonstrate that MCSTransWnet successfully predicts postoperative corneal topographies (structural similarity = 0.765, peak signal-to-noise ratio = 16.012, Fréchet inception distance = 9.264). Obtaining the approximate shape of the postoperative corneal topography in advance gives clinicians an additional reference, guides changes to surgical planning, and improves the success rate of surgery.
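The abstract evaluates predicted topographies with three image-similarity metrics. As an illustrative sketch only (not the authors' evaluation code), peak signal-to-noise ratio (PSNR) for intensity values normalized to [0, 1] can be computed in plain Python as below; the pixel lists and the `max_val = 1.0` dynamic range are assumptions for the example.

```python
import math

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences.

    PSNR = 10 * log10(max_val^2 / MSE); higher values mean the predicted
    image is closer to the reference image.
    """
    assert len(pred) == len(target) and len(pred) > 0
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# Toy example with hypothetical pixel values (not data from the paper):
print(round(psnr([0.50, 0.50, 0.50, 0.50], [0.60, 0.60, 0.60, 0.60]), 3))  # → 20.0
```

A constant per-pixel error of 0.1 gives MSE = 0.01, so PSNR = 10·log10(1/0.01) = 20 dB; the paper's reported 16.012 corresponds to a somewhat larger average reconstruction error.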