Conrad Testagrose, Vikash Gupta, Barbaros S. Erdal, Richard White, Robert W. Maxwell, Xudong Liu, Indika Kahanda, Sherif Elfayoumy, William F. Klostermeyer, Mutlu Demirer
Identifier
DOI:10.1109/bibm55620.2022.9995206
Abstract
Breast density is an indicator of a patient's predisposed risk of breast cancer. Although the mechanism is not fully understood, increased breast density raises the likelihood of developing breast cancer. Accurately assessing breast density from mammogram images is a challenging task for radiologists. A patient's breast density is assigned to one of four categories defined by the Breast Imaging Reporting and Data System (BI-RADS). There have been efforts to develop automated approaches that assist radiologists in classifying a patient's breast density, and interest in using deep learning for this purpose has grown significantly in recent years. The preprocessing techniques used to develop these deep learning approaches often have a profound impact on a model's accuracy and clinical viability. In this paper, we present a novel image preprocessing technique in which we concatenate individual mammogram images, and we compare the results of this technique between Inception-v3 and a vision transformer (ViT). The results are compared using the area under the receiver operating characteristic (ROC) curve (AUC) as well as traditional accuracy metrics.
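The two ingredients named in the abstract can be illustrated with a minimal sketch. The concatenation layout below (side-by-side stacking of equal-height views) and the function names are assumptions for illustration only; the paper does not specify its exact preprocessing layout. The AUC computation uses the standard rank-sum (Mann-Whitney) identity.

```python
import numpy as np

def concatenate_views(views):
    """Concatenate individual mammogram view images side by side.
    `views` is a list of 2-D arrays of equal height (a hypothetical
    simplification; the paper does not specify the exact layout)."""
    return np.concatenate(views, axis=1)

def roc_auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity,
    assuming binary labels in {0, 1} and no tied scores."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    order = scores.argsort()
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Two toy 4x4 "views" concatenated into one 4x8 model input.
cc, mlo = np.zeros((4, 4)), np.ones((4, 4))
stacked = concatenate_views([cc, mlo])
print(stacked.shape)  # (4, 8)

# AUC for toy scores against binary ground truth.
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

The concatenated array would then be fed to a classifier such as Inception-v3 or a ViT; the toy AUC call shows the evaluation metric on hand-made scores.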