Keywords
Computer science
Artificial intelligence
Deep learning
Frame (networking)
Context (archaeology)
Computer vision
Flexibility (engineering)
Consistency (knowledge bases)
Generalizability theory
Similarity (geometry)
Machine learning
Image (mathematics)
Mathematics
Telecommunications
Paleontology
Statistics
Biology
Authors
Mingyuan Luo,Xin Yang,Shouxin Zhang,Haoran Dou,Xindi Hu,Yuhao Huang,Nishant Ravikumar,Songcheng Xu,Yuanji Zhang,Yi Xiong,Wufeng Xue,Alejandro F. Frangi,Dong Ni,Litao Sun
Identifier
DOI:10.1016/j.media.2023.102810
Abstract
Sensorless freehand 3D ultrasound (US) reconstruction based on deep networks shows promising advantages, such as a large field of view, relatively high resolution, low cost, and ease of use. However, existing methods mainly consider vanilla scan strategies with limited inter-frame variation, and thus degrade on the complex but routine scan sequences encountered in clinics. In this context, we propose a novel online learning framework for freehand 3D US reconstruction under complex scan strategies with diverse scanning velocities and poses. First, we devise a motion-weighted training loss in the training phase to regularize the scan variation frame by frame and better mitigate the negative effects of uneven inter-frame velocity. Second, we effectively drive online learning with local-to-global pseudo-supervision, mining both frame-level contextual consistency and a path-level similarity constraint to improve inter-frame transformation estimation. We explore a global adversarial shape prior before transferring the latent anatomical prior as supervision. Third, we build a feasible differentiable reconstruction approximation to enable end-to-end optimization of our online learning. Experimental results show that our freehand 3D US reconstruction framework outperformed current methods on two large simulated datasets and one real dataset. In addition, we applied the proposed framework to clinical scan videos to further validate its effectiveness and generalizability.
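The abstract does not specify the form of the motion-weighted training loss. A minimal NumPy sketch of one plausible variant, which weights each frame's pose-estimation error by the magnitude of its ground-truth inter-frame motion so that fast or uneven scan segments contribute proportionally more, might look like the following. The 6-DoF transform parameterization, the function name, and the weighting scheme are all illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def motion_weighted_loss(pred_transforms, gt_transforms, eps=1e-8):
    """Hypothetical motion-weighted loss for inter-frame transform regression.

    pred_transforms, gt_transforms: (N, 6) arrays of per-frame-pair
    transforms (e.g. 3 rotation + 3 translation parameters).
    """
    # Per-frame-pair regression error (L2 over the 6 parameters).
    err = np.linalg.norm(pred_transforms - gt_transforms, axis=1)
    # Motion magnitude of each ground-truth inter-frame transform.
    motion = np.linalg.norm(gt_transforms, axis=1)
    # Normalized weights: frames with larger scan motion weigh more.
    weights = motion / (motion.sum() + eps)
    return float(np.sum(weights * err))
```

Under this weighting, a sequence scanned at perfectly even velocity reduces to a uniformly weighted loss, while abrupt probe accelerations dominate the gradient signal.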