Artificial intelligence
Computer science
Computer vision
Ground truth
Frame (networking)
Tracking (education)
Segmentation
Frame rate
Deep learning
Pattern recognition (psychology)
Psychology
Pedagogy
Telecommunications
Authors
Xianjin Dai,Yang Lei,Justin Roper,Yue Chen,Jeffrey D. Bradley,Walter J. Curran,Tian Liu,Xiaofeng Yang
Abstract
Purpose: Ultrasound (US) imaging is an established imaging modality capable of offering video-rate volumetric images without ionizing radiation. It has the potential for intra-fraction motion tracking in radiation therapy. In this study, a deep learning-based method has been developed to tackle the challenges in motion tracking using US imaging.

Methods: We present a Markov-like network, implemented via generative adversarial networks, to extract features from sequential US frames (one tracked frame followed by untracked frames) and thereby estimate a set of deformation vector fields (DVFs) through the registration of the tracked frame and the untracked frames. The positions of the landmarks in the untracked frames are finally determined by shifting the landmarks in the tracked frame according to the estimated DVFs. The performance of the proposed method was evaluated on the testing dataset by calculating the tracking error (TE) between the predicted and ground truth landmarks on each frame.

Results: The proposed method was evaluated using the MICCAI CLUST 2015 dataset, which was collected using seven US scanners with eight types of transducers, and the Cardiac Acquisitions for Multi-structure Ultrasound Segmentation (CAMUS) dataset, which was acquired using GE Vivid E95 ultrasound scanners. The CLUST dataset contains 63 2D and 22 3D US image sequences from 42 and 18 subjects, respectively, and the CAMUS dataset includes 2D US images from 450 patients. On the CLUST dataset, our proposed method achieved a mean tracking error of 0.70 ± 0.38 mm for the 2D sequences and 1.71 ± 0.84 mm for the 3D sequences on the publicly available annotations. On the CAMUS dataset, a mean tracking error of 0.54 ± 1.24 mm was achieved for the landmarks in the left atrium.

Conclusions: A novel motion tracking algorithm using US images based on modern deep learning techniques has been demonstrated in this study. The proposed method can offer millimeter-level tumor motion prediction in real time, which has the potential to be adopted into routine tumor motion management in radiation therapy.
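To make the evaluation pipeline concrete, below is a minimal sketch (not the authors' implementation) of the two generic steps described in the abstract: propagating landmarks from the tracked frame to an untracked frame using an estimated DVF, and computing the tracking error as the Euclidean distance between predicted and ground-truth landmarks. All function names, the (y, x) displacement layout of the DVF, and the pixel spacing are illustrative assumptions.

```python
# Minimal sketch, assuming the DVF stores per-pixel (dy, dx) displacements
# (in pixels) that map the tracked frame onto the untracked frame.
import numpy as np


def propagate_landmarks(landmarks_mm, dvf, spacing_mm):
    """Shift landmark positions (in mm) by the DVF sampled at their pixels.

    landmarks_mm : (N, 2) array of (y, x) landmark positions in mm.
    dvf          : (H, W, 2) array of per-pixel displacements in pixels.
    spacing_mm   : (2,) pixel spacing in mm along (y, x).
    """
    spacing = np.asarray(spacing_mm, dtype=float)
    # Nearest-pixel indices at which to sample the displacement field.
    idx = np.round(landmarks_mm / spacing).astype(int)
    idx[:, 0] = np.clip(idx[:, 0], 0, dvf.shape[0] - 1)
    idx[:, 1] = np.clip(idx[:, 1], 0, dvf.shape[1] - 1)
    # Displacement at each landmark, converted from pixels to mm.
    disp_mm = dvf[idx[:, 0], idx[:, 1]] * spacing
    return landmarks_mm + disp_mm


def tracking_error(predicted_mm, ground_truth_mm):
    """Mean Euclidean distance (mm) between predicted and annotated landmarks."""
    return float(np.mean(np.linalg.norm(predicted_mm - ground_truth_mm, axis=1)))


if __name__ == "__main__":
    # Toy example: a uniform 1-pixel shift along x with 0.5 mm pixel spacing.
    dvf = np.zeros((128, 128, 2))
    dvf[..., 1] = 1.0
    landmarks = np.array([[32.0, 32.0], [20.0, 40.5]])
    predicted = propagate_landmarks(landmarks, dvf, spacing_mm=(0.5, 0.5))
    truth = landmarks + np.array([0.0, 0.5])
    print("TE (mm):", tracking_error(predicted, truth))  # ~0.0
```

In the paper, the DVFs themselves come from the Markov-like adversarial registration network; the sketch only illustrates how such a field, once estimated, yields landmark predictions and a millimeter-scale tracking error.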