Multi-Scale Transformer Network With Edge-Aware Pre-Training for Cross-Modality MR Image Synthesis

Keywords: Ground truth · Artificial intelligence · Computer science · Encoder · Modality · Autoencoder · Pattern recognition · Edge detection · Deep learning · Computer vision
Authors
Yonghao Li,Tao Zhou,Kelei He,Yi Zhou,Dinggang Shen
Source
Journal: IEEE Transactions on Medical Imaging [Institute of Electrical and Electronics Engineers]
Volume/Issue: 42 (11): 3395-3407 · Cited by: 21
Identifier
DOI:10.1109/tmi.2023.3288001
Abstract

Cross-modality magnetic resonance (MR) image synthesis can be used to generate missing modalities from given ones. Existing (supervised learning) methods often require a large amount of paired multi-modal data to train an effective synthesis model. However, it is often challenging to obtain sufficient paired data for supervised training; in practice, we typically have a small amount of paired data but a large amount of unpaired data. To take advantage of both paired and unpaired data, in this paper we propose a Multi-scale Transformer Network (MT-Net) with edge-aware pre-training for cross-modality MR image synthesis. Specifically, an Edge-preserving Masked AutoEncoder (Edge-MAE) is first pre-trained in a self-supervised manner to simultaneously perform 1) image imputation for randomly masked patches in each image and 2) whole-image edge map estimation, which effectively learns both contextual and structural information. In addition, a novel patch-wise loss is proposed to enhance the performance of Edge-MAE by treating masked patches differently according to the difficulty of their respective imputations. Building on this pre-training, in the subsequent fine-tuning stage a Dual-scale Selective Fusion (DSF) module is designed (in our MT-Net) to synthesize missing-modality images by integrating multi-scale features extracted from the encoder of the pre-trained Edge-MAE. Furthermore, this pre-trained encoder is also employed to extract high-level features from the synthesized image and the corresponding ground-truth image, which are required to be consistent during training. Experimental results show that our MT-Net achieves performance comparable to competing methods even when using only 70% of all available paired data. Our code will be released at https://github.com/lyhkevin/MT-Net .
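The abstract's pre-training step relies on randomly masking image patches and asking the autoencoder to impute them. As a minimal illustration of that masking mechanics (a NumPy sketch only; `random_patch_mask`, the patch size, and the 75% mask ratio are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def random_patch_mask(image, patch_size=16, mask_ratio=0.75, seed=0):
    """Split a square image into non-overlapping patches and randomly
    mask a fraction of them, MAE-style. Returns the flattened patches,
    a boolean mask (True = masked), and the visible patches only."""
    h, w = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    gh, gw = h // patch_size, w // patch_size
    n = gh * gw
    # Reshape into (n_patches, patch_size * patch_size) row-major patches.
    patches = (image.reshape(gh, patch_size, gw, patch_size)
                    .transpose(0, 2, 1, 3)
                    .reshape(n, patch_size * patch_size))
    rng = np.random.default_rng(seed)
    n_masked = int(n * mask_ratio)
    mask = np.zeros(n, dtype=bool)
    mask[rng.choice(n, size=n_masked, replace=False)] = True
    visible = patches[~mask]  # only these would be fed to the encoder
    return patches, mask, visible

img = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
patches, mask, visible = random_patch_mask(img)
print(patches.shape, int(mask.sum()), visible.shape)  # (16, 256) 12 (4, 256)
```

In an actual Edge-MAE-style setup, the decoder would reconstruct the masked patches (and an edge map) from the visible ones; the sketch above only shows the masking/partition step.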
