Malignancy
Deep learning
Artificial intelligence
Transfer learning
Computer science
Convolutional neural network
Esophagus
Cancer
Machine learning
Esophageal cancer
Medicine
Surgery
Pathology
Internal medicine
Authors
Priti Shaw,Suresh Sankaranarayanan,Pascal Lorenz
Identifier
DOI:10.1109/iccis56375.2022.9998162
Abstract
Esophageal malignancy is a rare form of cancer that starts in the esophagus and spreads to other parts of the body, posing a severe risk to the liver, lungs, lymph nodes, and stomach. Studies have shown that esophageal cancer is one of the most prevalent causes of cancer mortality; in 2020, 604,100 individuals were diagnosed with this deadly disease. Many medical studies are carried out on this topic every year, and a similar focus has been placed on AI-based deep learning models for the classification of malignancy. The challenge, however, is that these AI models are complex and lack transparency, and no information is available to explain their opaque decisions. Since AI-based medical research demands reliability, explainability becomes essential. In this research we therefore use an Explainable AI (XAI) technique, LIME, to create trust-based models for the early detection of esophageal malignancy. We use a simple CNN model and several transfer-learning-based models, trained on real endoscopic images from the Kvasir-v2 dataset, achieving an accuracy of 88.75% with the DenseNet-201 model; LIME is then applied to explain the classified images. The deep learning model, combined with explainable AI, gives a clear picture of the image regions contributing to the malignancy prediction and promotes confidence in the model without the intervention of a domain expert.
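The abstract describes using LIME to highlight which image regions drive a classifier's prediction. As an illustration only (the paper's own pipeline uses the `lime` package with Keras models; none of that code is given here), the core LIME idea for images can be sketched in plain NumPy: hide random subsets of image segments, query the model on each perturbed image, and fit a locally weighted linear model whose coefficients rank segment importance. All names below (`lime_explain`, `predict_fn`, the fill value, the kernel width) are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def lime_explain(image, segments, predict_fn, n_samples=500, seed=0):
    """Minimal LIME-style explanation for a single image.

    image      : 2-D float array (the input being explained)
    segments   : int array, same shape, labeling superpixel regions
    predict_fn : maps a perturbed image to a class probability
    Returns    : {segment_id: importance coefficient}
    """
    rng = np.random.default_rng(seed)
    seg_ids = np.unique(segments)
    k = len(seg_ids)

    # Binary interpretable samples: 1 = segment kept, 0 = segment hidden.
    Z = rng.integers(0, 2, size=(n_samples, k))
    Z[0] = 1  # always include the unperturbed image

    fill = image.mean()  # neutral value used to hide a segment
    preds = []
    for z in Z:
        perturbed = image.copy()
        for j, sid in enumerate(seg_ids):
            if z[j] == 0:
                perturbed[segments == sid] = fill
        preds.append(predict_fn(perturbed))
    y = np.asarray(preds, dtype=float)

    # Weight samples by proximity to the original (all segments kept).
    dist = 1.0 - Z.mean(axis=1)
    sw = np.sqrt(np.exp(-(dist ** 2) / 0.25))

    # Weighted least squares with an intercept column.
    X = np.hstack([Z, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return dict(zip(seg_ids.tolist(), coef[:k]))
```

On a toy image where the "model" only looks at one quadrant, the coefficient for that quadrant's segment dominates, which is exactly the region-level explanation the abstract refers to; in practice the segments would come from a superpixel algorithm and `predict_fn` from the trained DenseNet-201 classifier.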