Interpretability
Computer science
Convolutional neural network
Artificial intelligence
Brain tumor
Identification (biology)
Magnetic resonance imaging
Machine learning
Transparency (behavior)
Process (computing)
Pattern recognition (psychology)
Medicine
Radiology
Pathology
Plant
Computer security
Biology
Operating system
Authors
Md. Ariful Islam,M. F. Mridha,Mejdl Safran,Sultan Alfarhood,Md. Mohsin Kabir
Abstract
Due to the complex structure of the brain, variation in tumor shape and size, and the resemblance between tumor and healthy tissue, reliable and efficient identification of brain tumors from magnetic resonance imaging (MRI) remains a persistent challenge. Because manual identification of tumors is time-consuming and error-prone, there is a clear need for advanced automated procedures that improve detection accuracy and efficiency. Our study addresses this challenge with an improved convolutional neural network (CNN) framework derived from DenseNet121 to increase the accuracy of brain tumor detection. The proposed model was comprehensively evaluated against 12 baseline CNN models and 5 state-of-the-art architectures, namely Vision Transformer (ViT), ConvNeXt, MobileNetV3, FastViT, and InternImage. It achieved accuracies of 98.4% and 99.3% on two separate datasets, outperforming all 17 models evaluated. The improved model was integrated with Explainable AI (XAI) techniques, particularly Grad-CAM++, enabling accurate diagnosis and localization of difficult tumor instances, including small metastatic lesions and nonenhancing low-grade gliomas. The XAI framework distinctly highlights the regions that signify tumor presence, enhancing both the model's accuracy and its interpretability. The results highlight the potential of our method as a reliable diagnostic instrument that not only helps healthcare practitioners comprehend and confirm artificial intelligence (AI)-driven predictions but also brings transparency to the model's decision-making process, ultimately improving patient outcomes. This advancement marks significant progress in the use of AI in neuro-oncology, improving both diagnostic interpretability and precision.
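The abstract does not describe the exact modifications made to DenseNet121 or the authors' Grad-CAM++ implementation, so the following is only a minimal illustrative sketch of the general approach: a Keras DenseNet121 backbone with a hypothetical classification head, plus a common closed-form Grad-CAM++ heatmap computation. NUM_CLASSES, the dropout rate, and the target layer name conv5_block16_concat are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import DenseNet121

NUM_CLASSES = 4          # hypothetical, e.g., glioma / meningioma / pituitary / no tumor
IMG_SIZE = (224, 224)

# Transfer-learning classifier: DenseNet121 backbone with a small custom head.
# Inputs are assumed to be preprocessed with densenet.preprocess_input.
base = DenseNet121(weights="imagenet", include_top=False,
                   input_shape=(*IMG_SIZE, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
x = tf.keras.layers.Dropout(0.3)(x)                      # assumed regularization
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(base.input, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

def grad_cam_plus_plus(model, image, class_idx,
                       conv_layer="conv5_block16_concat"):
    """Grad-CAM++ heatmap for one image of shape (H, W, 3)."""
    grad_model = tf.keras.Model(
        model.input,
        [model.get_layer(conv_layer).output, model.output])
    img = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img)
        score = preds[:, class_idx]          # class score (softmax, for simplicity)
    grads = tape.gradient(score, conv_out)
    # Grad-CAM++ pixel-wise weights, using the common approximation that
    # expresses higher-order derivatives via powers of the first-order gradient.
    grads2, grads3 = grads ** 2, grads ** 3
    sum_a = tf.reduce_sum(conv_out, axis=(1, 2), keepdims=True)
    denom = 2.0 * grads2 + sum_a * grads3
    denom = tf.where(denom == 0.0, tf.ones_like(denom), denom)
    alphas = grads2 / denom
    weights = tf.reduce_sum(alphas * tf.nn.relu(grads), axis=(1, 2))
    cam = tf.nn.relu(tf.reduce_sum(weights[:, None, None, :] * conv_out, axis=-1))
    cam = cam[0].numpy()
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam  # upsample to IMG_SIZE and overlay on the MRI slice for display
```

Grad-CAM++ refines Grad-CAM by computing pixel-wise importance weights rather than a single global average of the gradients, which is why it tends to localize small or multiple activations more sharply; that property is consistent with the abstract's emphasis on small metastatic lesions and nonenhancing low-grade gliomas.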