Biometrics
Computer Science
Segmentation
Artificial Intelligence
Pattern Recognition (Psychology)
Architecture
Speech Recognition
Art
Visual Arts
Authors
Sajal Kumar Babu Degala,Ravi Prakash Tewari,Pankaj Kamra,K. Uvanesh,Ramesh Pandey
Identifier
DOI:10.1016/j.compbiomed.2024.109000
Abstract
Fetal health is evaluated using biometric parameters obtained from low-resolution ultrasound images. In existing protocols, the accuracy of these parameters typically depends on conventional image-processing approaches and is therefore prone to error. This study introduces the Attention Gate Double U-Net with Guided Decoder (ADU-GD) model, specifically crafted for fetal biometric parameter prediction. The attention network and guided decoder are designed to dynamically merge local features with their global dependencies, enhancing the precision of parameter estimation. When benchmarked against well-established models, ADU-GD achieves superior performance, with a Mean Absolute Error of 0.99 mm and a segmentation accuracy of 99.1%. The proposed model consistently achieved a high Dice index of about 99.1 ± 0.8, a low Hausdorff distance of about 1.01 ± 1.07, and a low Average Symmetric Surface Distance of about 0.25 ± 0.21. In a comprehensive evaluation, ADU-GD outperformed existing deep-learning models, including Double U-Net, DeepLabv3, FCN-32s, PSPNet, SegNet, Trans U-Net, Swin U-Net, Mask-R2CNN, and RDHCformer, in terms of Mean Absolute Error for key fetal dimensions, namely Head Circumference, Abdominal Circumference, Femur Length, and BiParietal Diameter, achieving MAE values of 2.2 mm, 2.6 mm, 0.6 mm, and 1.2 mm, respectively.
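The paper's implementation is not reproduced here. As an illustrative sketch only, the snippet below shows a conventional additive attention gate of the kind used in Attention U-Net style decoders, where a coarse gating signal re-weights skip-connection features; this is the general mechanism the abstract describes for merging local features with their global dependencies. The class name AttentionGate, the channel sizes, and the PyTorch framing are assumptions, not the authors' ADU-GD implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    """Additive attention gate (Attention U-Net style), shown for illustration only.

    Skip-connection features `x` are re-weighted by an attention map computed
    from `x` and a coarser gating signal `g` taken from the decoder path.
    """
    def __init__(self, in_channels: int, gating_channels: int, inter_channels: int):
        super().__init__()
        self.theta_x = nn.Conv2d(in_channels, inter_channels, kernel_size=1)
        self.phi_g = nn.Conv2d(gating_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # Project both inputs into a common intermediate space.
        theta = self.theta_x(x)
        phi = F.interpolate(self.phi_g(g), size=theta.shape[2:],
                            mode="bilinear", align_corners=False)
        # Additive attention: combine, non-linearity, collapse to one channel, squash to [0, 1].
        attn = torch.sigmoid(self.psi(F.relu(theta + phi)))   # (N, 1, H, W)
        # Scale the skip features by the learned attention coefficients.
        return x * attn

# Example usage with illustrative tensor shapes (not the paper's configuration):
# x = torch.randn(1, 64, 128, 128)   # encoder skip features
# g = torch.randn(1, 128, 64, 64)    # coarser decoder gating signal
# out = AttentionGate(64, 128, 32)(x, g)   # -> shape (1, 64, 128, 128)
```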
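The reported evaluation uses standard segmentation and measurement metrics (Dice index, Hausdorff distance, Average Symmetric Surface Distance, and Mean Absolute Error of the derived biometric values). As a rough sketch under the assumption that predictions and references are available as binary masks, contour point sets, and millimetre measurements, the hypothetical helper functions below show how the Dice index, symmetric Hausdorff distance, and MAE are conventionally computed; they are not taken from the paper.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice index between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def hausdorff_distance(pred_points: np.ndarray, target_points: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two contour point sets (N x 2 arrays)."""
    return max(directed_hausdorff(pred_points, target_points)[0],
               directed_hausdorff(target_points, pred_points)[0])

def mean_absolute_error(predicted_mm: np.ndarray, reference_mm: np.ndarray) -> float:
    """MAE between predicted and reference biometric measurements in millimetres."""
    return float(np.mean(np.abs(predicted_mm - reference_mm)))
```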