Landmark
Computer science
Artificial intelligence
Face (sociological concept)
Surgical planning
Craniofacial
Pattern recognition (psychology)
Computer vision
Medicine
Social science
Psychiatry
Sociology
Radiology
Authors
Jiahao Bao,X. Zhang,Shuguang Xiang,Hao Liu,Ming Cheng,Yang Yang,Xiaolin Huang,Wei Xiang,Wenpeng Cui,Hong Lai,Shuo Huang,Yan Wang,Dianwei Qian,Hong Yu
Identifiers
DOI:10.1177/00220345241253186
Abstract
The increasing application of virtual surgical planning (VSP) in orthognathic surgery implies a critical need for accurate prediction of facial and skeletal shapes. The craniofacial relationship in patients with dentofacial deformities is still not understood, and transformations between facial and skeletal shapes remain a challenging task due to intricate anatomical structures and nonlinear relationships between the facial soft tissue and bones. In this study, a novel bidirectional 3-dimensional (3D) deep learning framework, named P2P-ConvGC, was developed and validated based on a large-scale data set for accurate subject-specific transformations between facial and skeletal shapes. Specifically, the 2-stage point-sampling strategy was used to generate multiple nonoverlapping point subsets to represent high-resolution facial and skeletal shapes. Facial and skeletal point subsets were separately input into the prediction system to predict the corresponding skeletal and facial point subsets via the skeletal prediction subnetwork and facial prediction subnetwork. For quantitative evaluation, the accuracy was calculated with shape errors and landmark errors between the predicted skeleton or face and the corresponding ground truths. The shape error was calculated by comparing the predicted point sets with the ground truths, with P2P-ConvGC outperforming existing state-of-the-art algorithms including P2P-Net, P2P-ASNL, and P2P-Conv. The total landmark errors (Euclidean distances of craniomaxillofacial landmarks) of P2P-ConvGC in the upper skull, mandible, and facial soft tissues were 1.964 ± 0.904 mm, 2.398 ± 1.174 mm, and 2.226 ± 0.774 mm, respectively. Furthermore, the clinical feasibility of the bidirectional model was validated using a clinical cohort. The result demonstrated its prediction ability with average surface deviation errors of 0.895 ± 0.175 mm for facial prediction and 0.906 ± 0.082 mm for skeletal prediction. To conclude, our proposed model achieved good performance on the subject-specific prediction of facial and skeletal shapes and showed clinical application potential in postoperative facial prediction and VSP for orthognathic surgery.
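To make the evaluation metrics in the abstract concrete, the following is a minimal sketch of how a landmark error (per-landmark Euclidean distance in mm) and a point-set shape error could be computed between a predicted and a ground-truth shape. This is not the paper's implementation: the arrays, the symmetric nearest-neighbour (Chamfer-style) shape error, and all function names are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): landmark error as Euclidean
# distance, and a shape error computed by comparing predicted and
# ground-truth point sets with a symmetric nearest-neighbour distance.
import numpy as np
from scipy.spatial import cKDTree


def landmark_error(pred_landmarks: np.ndarray, gt_landmarks: np.ndarray) -> np.ndarray:
    """Per-landmark Euclidean distance (mm); both inputs are (L, 3) arrays."""
    return np.linalg.norm(pred_landmarks - gt_landmarks, axis=1)


def shape_error(pred_points: np.ndarray, gt_points: np.ndarray) -> float:
    """Symmetric average nearest-neighbour distance between two (N, 3) point sets.

    Assumed metric -- the paper's exact shape-error definition may differ.
    """
    d_pred_to_gt, _ = cKDTree(gt_points).query(pred_points)
    d_gt_to_pred, _ = cKDTree(pred_points).query(gt_points)
    return 0.5 * (d_pred_to_gt.mean() + d_gt_to_pred.mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred = rng.normal(size=(2048, 3))                   # hypothetical predicted skeletal points
    gt = pred + rng.normal(scale=0.5, size=(2048, 3))   # hypothetical ground-truth points
    errs = landmark_error(pred[:10], gt[:10])           # treat first 10 points as landmarks
    print(f"landmark error: {errs.mean():.3f} ± {errs.std():.3f} mm")
    print(f"shape error: {shape_error(pred, gt):.3f} mm")
```

Reported values such as 1.964 ± 0.904 mm would correspond to the mean ± standard deviation of such per-landmark distances over a test cohort.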