Computer science
Interpretability
Breast ultrasonography
Artificial intelligence
Workload
Breast cancer
Breast imaging
Medical imaging
Medical physics
Mammography
Machine learning
Medicine
Cancer
Internal medicine
Operating system
Authors
Qinghua Huang,Dan Wang,Zhenkun Lu,Shichong Zhou,Jia Li,Longzhong Liu,Cai Chang
Identifiers
DOI:10.1016/j.eswa.2023.120450
Abstract
Breast cancer is one of the most prevalent malignant tumors among women worldwide and poses a serious threat to women's lives and health. Breast ultrasound imaging is widely used in clinical breast cancer detection owing to its safety, real-time operation, and convenience. However, ultrasound-based diagnosis depends on experienced sonographers to read the images and requires high-quality ultrasound imaging. On the one hand, manually reading breast ultrasound images is time-consuming and burdensome for physicians; on the other hand, training a sonographer is a costly process. The development of computer-aided diagnosis (CAD) systems in recent years addresses these problems to some extent. More and more CAD systems for breast ultrasound are being used in clinical practice, and their sensitivity and accuracy exceed those of less experienced sonographers, which reduces physicians' workload and improves diagnostic efficiency. However, existing CAD systems mainly rely on texture information, such as local binary pattern (LBP) statistical histograms, or on deep features extracted by deep networks. This does not match the way physicians diagnose and does not exploit medical knowledge, making it difficult to balance interpretability and diagnostic performance. In this work, a new interpretable reasoning paradigm from images to knowledge is proposed. Under this paradigm, interpretable features are first perceived from the images as medical knowledge. Because the perceived features may be identified inaccurately, they are then amended to obtain high-quality knowledge. Finally, the knowledge is organized into a knowledge graph, and the diagnosis is obtained by interpretable knowledge inference over the graph.
Experimental results demonstrate that, compared with mainstream diagnostic systems based on deep learning and those based on traditional machine learning, our approach achieves a favorable trade-off between interpretability and diagnostic performance.
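As a rough illustration of the LBP texture features the abstract contrasts against, the following is a minimal sketch of a basic 8-neighbour LBP histogram over a grayscale image patch. This is not the authors' implementation, and the function name and 256-bin layout are illustrative assumptions; production systems typically use rotation-invariant or "uniform" LBP variants.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour Local Binary Pattern histogram (illustrative sketch).

    img: 2-D grayscale array (e.g. an ultrasound image patch).
    Returns a 256-bin normalised histogram of LBP codes.
    """
    img = np.asarray(img, dtype=np.int32)
    center = img[1:-1, 1:-1]  # interior pixels; border pixels have no full neighbourhood
    # Offsets of the 8 neighbours, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        # Shifted view of the image aligned with the interior region.
        neighbour = img[1 + dy: img.shape[0] - 1 + dy,
                        1 + dx: img.shape[1] - 1 + dx]
        # Set this bit wherever the neighbour is at least as bright as the center.
        codes |= (neighbour >= center).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

A diagnosis system of the kind the abstract criticizes would feed such histograms into a classifier; the proposed paradigm instead perceives named, interpretable findings and reasons over a knowledge graph.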