Computer science
Cross-entropy
Artificial intelligence
Mammography
Pyramid (geometry)
Convolutional neural network
Pattern recognition (psychology)
Maxima and minima
Machine learning
Data mining
Breast cancer
Mathematics
Mathematical analysis
Internal medicine
Cancer
Medicine
Geometry
Authors
Yanfei Zhong, Yan Hong Piao, Guohui Zhang
Identifier
DOI:10.1088/1361-6560/ad02d7
Abstract
Objective: Breast density is an important indicator of breast cancer risk. However, existing methods for breast density classification do not fully utilise the multi-view information produced by mammography and thus have limited classification accuracy.
Method: In this paper, we propose a multi-view fusion network, the local-global dynamic pyramidal-convolution transformer network (LG-DPTNet), for breast density classification in mammography. First, for single-view feature extraction, we develop a dynamic pyramid convolutional network that adaptively learns global and local features. Second, we address a shortcoming of traditional multi-view fusion methods with a cross-transformer that integrates fine-grained information and global contextual information from different views, thereby providing the network with accurate predictions. Finally, during training we replace the traditional cross-entropy loss with an asymmetric focal loss to mitigate the class imbalance present in public datasets, further improving model performance.
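The asymmetric focal loss mentioned in the Method can be illustrated in a minimal form. The abstract does not specify the exact formulation or hyperparameters the authors use, so the function below, including the `gamma_pos` and `gamma_neg` focusing parameters, is an assumed binary-classification sketch of the general idea: easy negative examples are down-weighted more aggressively than positives, which counteracts class imbalance.

```python
import numpy as np

def asymmetric_focal_loss(logits, targets, gamma_pos=0.0, gamma_neg=2.0, eps=1e-8):
    """Illustrative asymmetric focal loss (binary form).

    gamma_pos / gamma_neg control how strongly easy positive / negative
    examples are down-weighted; gamma_pos = gamma_neg = 0 recovers plain
    binary cross-entropy.
    """
    p = 1.0 / (1.0 + np.exp(-logits))  # sigmoid probabilities
    # Positive-class term: modulated by (1 - p)^gamma_pos
    pos = targets * (1.0 - p) ** gamma_pos * np.log(p + eps)
    # Negative-class term: modulated by p^gamma_neg (easy negatives shrink fast)
    neg = (1.0 - targets) * p ** gamma_neg * np.log(1.0 - p + eps)
    return -np.mean(pos + neg)
```

Because confidently classified negatives contribute almost nothing once `gamma_neg > 0`, the gradient signal concentrates on the minority (positive) class and on hard examples, which is the behaviour the paper relies on to handle imbalanced density categories.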
 Results: We evaluated the effectiveness of our method on two publicly available mammography datasets, CBIS-DDSM and INbreast, and achieved areas under the curve (AUC) of 96.73% and 91.12%, respectively.
 Conclusion: Our experiments demonstrated that the devised fusion model can more effectively utilise the information contained in multiple views than existing models and exhibits classification performance that is superior to that of baseline and state-of-the-art methods.