Artificial intelligence
Medicine
Radiography
Convolutional neural network
Deep learning
Radiology
Femoral fracture
Machine learning
Osteoporosis
Receiver operating characteristic
Electronic health record
Femur
Medical physics
Health care
Computer science
Surgery
Pathology
Internal medicine
Economics
Economic growth
Authors
Jörg Schilcher, Alva Nilsson, Oliver Andlid, Anders Eklund
Identifier
DOI:10.1016/j.compbiomed.2023.107704
Abstract
Atypical femur fractures (AFF) represent a very rare type of fracture that can be difficult to discriminate radiologically from normal femur fractures (NFF). AFFs are associated with drugs that are administered to prevent osteoporosis-related fragility fractures, which are highly prevalent in the elderly population. Because these fractures are rare and the radiologic changes are subtle, currently only 7% of AFFs are correctly identified, which hinders adequate treatment for most patients with AFF. Deep learning models could be trained to automatically classify a fracture as AFF or NFF, thereby assisting radiologists in detecting these rare fractures. Historically, this classification task has used imaging data alone, with convolutional neural networks (CNNs) or vision transformers applied to radiographs. However, to mimic situations in which all available data are used to arrive at a diagnosis, we adopted a deep learning approach based on the integration of image data and tabular data (from electronic health records) for 159 patients with AFF and 914 patients with NFF. We hypothesized that the combined data, compiled from all the radiology departments of 72 hospitals in Sweden and from the Swedish National Patient Register, would improve classification accuracy compared to using only one modality. At the patient level, the area under the ROC curve (AUC) increased from 0.966 to 0.987 when the imaging data were integrated with seven pre-selected tabular variables, compared to using imaging data alone. More importantly, the sensitivity increased from 0.796 to 0.903. The impact of data fusion was greater when only a randomly selected subset of the available images was used, which made the amount of image and tabular data more balanced for each patient: the AUC then increased from 0.949 to 0.984, and the sensitivity from 0.727 to 0.849. These AUC improvements are not large, mainly because the CNN already performs excellently (AUC of 0.966) when only images are used. However, the improvement is highly relevant clinically, given the importance of accuracy in medical diagnostics. We expect an even greater effect when imaging data from a clinical workflow, comprising a more diverse set of diagnostic images, are used.
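The fusion of a CNN image branch with tabular EHR variables that the abstract describes can be illustrated with a minimal sketch. The PyTorch snippet below is an assumption-laden illustration, not the authors' actual architecture: the ResNet-18 backbone, the layer sizes, and the `FusionNet` name are hypothetical; only the seven tabular variables and the binary AFF/NFF output are taken from the abstract.

```python
# Minimal sketch of image + tabular feature fusion for binary AFF/NFF
# classification. All architectural details here are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class FusionNet(nn.Module):  # hypothetical name, not from the paper
    def __init__(self, n_tabular: int = 7):
        super().__init__()
        # Image branch: a ResNet-18 with its classifier removed, yielding a
        # 512-dimensional embedding per radiograph. Pretrained weights would
        # typically be loaded here; omitted to keep the sketch self-contained.
        backbone = models.resnet18(weights=None)
        self.image_branch = nn.Sequential(*list(backbone.children())[:-1])
        # Tabular branch: a small MLP over the seven pre-selected EHR variables.
        self.tabular_branch = nn.Sequential(
            nn.Linear(n_tabular, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
        )
        # Fusion head: concatenated embeddings -> single AFF-vs-NFF logit.
        self.head = nn.Sequential(
            nn.Linear(512 + 32, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, image: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        img_feat = self.image_branch(image).flatten(1)  # (B, 512)
        tab_feat = self.tabular_branch(tabular)         # (B, 32)
        return self.head(torch.cat([img_feat, tab_feat], dim=1))  # (B, 1) logit

# Example forward pass on dummy data: one 3-channel radiograph crop plus
# seven tabular variables per patient.
model = FusionNet(n_tabular=7)
logit = model(torch.randn(2, 3, 224, 224), torch.randn(2, 7))
prob_aff = torch.sigmoid(logit)  # probability of an atypical femur fracture
```

Late fusion by concatenating embeddings, as sketched here, is one common design choice; the abstract only states that image and tabular data were integrated, so other fusion strategies are equally plausible.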