TransMatch: A Transformer-Based Multilevel Dual-Stream Feature Matching Network for Unsupervised Deformable Image Registration

Keywords: Computer Science, Artificial Intelligence, Image Registration, Pattern Recognition (Psychology), Feature Extraction, Matching (Statistics), Feature (Linguistics), Voxel, Computer Vision, Image (Mathematics), Mathematics, Linguistics, Statistics, Philosophy
Authors
Zeyuan Chen, Yuanjie Zheng, James C. Gee
Source
Journal: IEEE Transactions on Medical Imaging [Institute of Electrical and Electronics Engineers]
Volume/Issue: 43 (1): 15-27    Citations: 53
Identifier
DOI: 10.1109/tmi.2023.3288136
Abstract

Feature matching, which refers to establishing correspondences between regions of two images (usually voxel features), is a crucial prerequisite of feature-based registration. For deformable image registration tasks, traditional feature-based methods typically use an iterative matching strategy for interest-region matching, in which feature selection and matching are explicit; however, specific feature selection schemes are often only applicable to particular application problems and require several minutes for each registration. In the past few years, learning-based methods such as VoxelMorph and TransMorph have been shown to be feasible, with performance competitive with traditional methods. However, these methods are usually single-stream: the two images to be registered are concatenated into a single 2-channel input, and the deformation field is output directly, so the transformation of image features into inter-image matching relationships is implicit. In this paper, we propose a novel end-to-end dual-stream unsupervised framework, named TransMatch, in which each image is fed into a separate stream branch and each branch performs feature extraction independently. We then perform explicit multilevel feature matching between image pairs via the query-key matching idea of the self-attention mechanism in the Transformer model. Comprehensive experiments are conducted on three 3D brain MR datasets, LPBA40, IXI, and OASIS. The results show that the proposed method achieves state-of-the-art performance on several evaluation metrics compared with commonly used registration methods, including SyN, NiftyReg, VoxelMorph, CycleMorph, ViT-V-Net, and TransMorph, demonstrating the effectiveness of our model for deformable medical image registration.
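
The central idea described in the abstract, replacing implicit single-stream fusion with explicit query-key matching between two feature streams, can be illustrated with a short cross-attention sketch. The code below is a minimal PyTorch illustration written under assumptions of our own (the module name CrossAttentionMatching, the token/channel shapes, and the head count are all hypothetical); it is not the authors' released implementation of TransMatch, only a sketch of the query-key matching mechanism the abstract refers to.

```python
# Minimal sketch: features of the moving image act as queries, features of the
# fixed image act as keys/values, so the attention weights form an explicit
# soft correspondence (matching) between the two feature streams.
# All names and shapes below are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class CrossAttentionMatching(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.to_q = nn.Linear(dim, dim)   # queries from the moving-image stream
        self.to_k = nn.Linear(dim, dim)   # keys from the fixed-image stream
        self.to_v = nn.Linear(dim, dim)   # values from the fixed-image stream
        self.proj = nn.Linear(dim, dim)

    def forward(self, feat_moving: torch.Tensor, feat_fixed: torch.Tensor) -> torch.Tensor:
        # feat_*: (batch, tokens, dim), e.g. flattened 3D voxel patches
        b, n, d = feat_moving.shape
        h = self.num_heads

        q = self.to_q(feat_moving).reshape(b, n, h, d // h).transpose(1, 2)
        k = self.to_k(feat_fixed).reshape(b, n, h, d // h).transpose(1, 2)
        v = self.to_v(feat_fixed).reshape(b, n, h, d // h).transpose(1, 2)

        # attention weights = soft matching between moving and fixed tokens
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)

        matched = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.proj(matched)


if __name__ == "__main__":
    # toy usage: two feature streams from a hypothetical dual-stream encoder
    moving = torch.randn(1, 512, 96)
    fixed = torch.randn(1, 512, 96)
    block = CrossAttentionMatching(dim=96)
    print(block(moving, fixed).shape)  # torch.Size([1, 512, 96])
```

In a multilevel design such as the one the abstract describes, a block of this kind would typically be applied at several feature resolutions, with the matched features then decoded into a deformation field; those surrounding components are omitted here.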