Keywords: computer science; artificial intelligence; feature extraction; pixel; minimum bounding box; pattern recognition (psychology); bounding overwatch; weighted voting; voting; computer vision; feature (linguistics); image (mathematics); linguistics; philosophy; politics; political science; law
Authors
Suliman Aladhadh, Rabbia Mahum
Source
Journal: IEEE Access (Institute of Electrical and Electronics Engineers)
Date: 2023-01-01
Volume/Issue: 11, pp. 22283-22296
Identifier
DOI: 10.1109/access.2023.3247502
Abstract
To detect knee disease, radiologists utilize multi-view images such as computed tomography (CT) scans, MRIs, and X-rays; X-ray is the cheapest and most widely used modality for acquiring such images. Various image processing techniques exist to detect knee disease at its initial stages; however, there is still room to improve the accuracy and precision of existing algorithms. Furthermore, in machine learning-based techniques, hand-crafted feature extraction is a tedious task. Therefore, in this paper, we propose a technique based on a customized CenterNet with a pixel-wise voting scheme to extract features automatically. Our model uses the most representative features owing to its strong localization and a weighted pixel-wise voting scheme that takes as input the bounding box predicted by the modified CenterNet and produces a more accurate box based on the voting score of each pixel inside the initial box. Moreover, we employ the concept of knowledge distillation, transferring knowledge from a complex network to a simple one, to keep our model simple without increasing its computational cost. The proposed model thus detects knee osteoarthritis (KOA) in knee images precisely and determines its severity level according to the Kellgren-Lawrence (KL) grading system, i.e., Grade-I, Grade-II, Grade-III, and Grade-IV. It is a robust, improved architecture based on CenterNet that uses a simple DenseNet-201 as the base network for feature extraction; owing to the dense blocks of this base network, the most representative features are extracted from the knee samples. We employed two benchmarks: the Mendeley VI dataset for training and testing, and the OAI dataset for cross-validation. We evaluated the proposed technique in various experiments, and the results show that it outperforms existing techniques, with an accuracy of 99.14% in testing and 98.97% in cross-validation.
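The abstract does not spell out the voting rule, so the following is only a plausible reading, sketched in Python/NumPy: every pixel inside the coarse CenterNet box votes with a per-pixel confidence score (here a hypothetical stand-in for an upsampled heatmap), and the refined box is the weighted extent of the high-scoring pixels. The function name `refine_box`, the `keep_ratio` threshold, and the two-sigma box size are illustrative assumptions, not the authors' method.

```python
import numpy as np

def refine_box(box, score_map, keep_ratio=0.5):
    # box: (x1, y1, x2, y2) coarse box from the detector, integer pixels.
    # score_map: 2-D (H, W) array of per-pixel confidence scores -- a
    # hypothetical stand-in for an upsampled CenterNet heatmap.
    x1, y1, x2, y2 = box
    patch = score_map[y1:y2, x1:x2]            # scores inside the coarse box
    ys, xs = np.nonzero(patch >= keep_ratio * patch.max())
    w = patch[ys, xs]                          # each pixel votes with its score
    cx = np.average(xs, weights=w)             # weighted centre ...
    cy = np.average(ys, weights=w)
    sx = np.sqrt(np.average((xs - cx) ** 2, weights=w))   # ... and spread
    sy = np.sqrt(np.average((ys - cy) ** 2, weights=w))
    k = 2.0                                    # half-size in spread units (assumption)
    return (x1 + cx - k * sx, y1 + cy - k * sy,
            x1 + cx + k * sx, y1 + cy + k * sy)

# Toy usage: a blob of high scores inside a deliberately loose box.
heat = np.zeros((64, 64))
heat[20:30, 24:36] = 1.0
print(refine_box((10, 10, 50, 50), heat))    # tightens toward the blob
```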
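The abstract mentions distilling a complex network into a simple one but gives no loss formulation. A minimal PyTorch sketch, assuming the standard Hinton-style recipe (temperature-softened KL term plus hard cross-entropy); `T` and `alpha` are hypothetical hyper-parameters, not values from the paper:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft term: KL divergence between temperature-softened teacher and
    # student distributions; hard term: cross-entropy with the true labels.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                # rescale gradients, per Hinton et al.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with 5 severity classes (healthy plus Grade-I..IV).
s = torch.randn(8, 5, requires_grad=True)      # student (simple network) logits
t = torch.randn(8, 5)                          # teacher (complex network) logits
y = torch.randint(0, 5, (8,))
distillation_loss(s, t, y).backward()
```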
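Finally, using DenseNet-201 as the feature-extraction backbone can be sketched directly with torchvision: dropping the classifier head leaves the dense blocks, which yield a 1920-channel feature map that a CenterNet-style detection head could consume. The input size and untrained weights here are illustrative only.

```python
import torch
import torchvision

# DenseNet-201 minus its classifier serves as the backbone; the dense blocks
# reuse earlier feature maps, which is what the abstract credits for the
# representative knee features.
backbone = torchvision.models.densenet201(weights=None).features

x = torch.randn(1, 3, 512, 512)                # one knee X-ray, assumed size
feats = backbone(x)
print(feats.shape)                             # torch.Size([1, 1920, 16, 16])
```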