Using Knowledge Graphs to Enhance the Interpretability of Clinical Decision Support Models
Keywords: Interpretability, Computer Science, Decision Support Systems, Artificial Intelligence, Machine Learning, Theoretical Computer Science, Knowledge Management
Authors
Huang Jin-ming, Liang Xiao, Junyi Yang, SiMing Chen
Identifier
DOI:10.1109/iccsmt51754.2020.00030
Abstract
Current clinical practice relies heavily on technology to support decision-making, and machine learning in particular is increasingly used in decision support systems. This can be attributed to information overload: clinicians cannot consider all available information on their own. The drawback of this approach is that such Clinical Decision Support Systems (CDSSs) are usually black boxes whose decision-making rationale cannot be understood. In a healthcare environment, however, trust and accountability are important concerns, and such systems should ideally be interpretable. By contrast, other areas rely almost entirely on observational data or subjective patient-reported questionnaires to quantify medical conditions. Developers need to apply cognitive-science-based Human-Computer Interaction (HCI) research methods to design practical models, including user-centered iterative design and common standards. The main contribution of this paper is a clinical decision support model with enhanced interpretability, including an automated interface generation engine. In designing personalized decision support, the model broadens the applicability of decision pushing. Clinical evidence is entered and displayed in tabular form; medical concepts are matched to SNOMED CT terms with consistent navigation and are finally displayed as a knowledge graph. The model increases the flexibility of interaction and integrates seamlessly into the clinical workflow, so domain experts can obtain advice quickly and take appropriate actions at convenient points in the workflow without additional effort or delay. Optimizing a CDSS's interaction and usability for providers can increase its adoption, and the iterative design of the CDSS improves the system's usability and its user acceptance scores. Our analysis shows that modern machine learning methods can provide interpretations compatible with a domain Interpretation Knowledge Base (IKB) and with the rankings produced by traditional methods. Future work should focus on replicating these findings on other datasets and on further testing different interpretability methods.
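As a concrete illustration of the evidence pipeline the abstract describes (tabular evidence, SNOMED CT term matching, knowledge-graph display), the following is a minimal sketch, not the authors' implementation. The SNOMED_CT lookup table, the build_evidence_graph helper, and the concept codes shown are illustrative assumptions; a production system would query a terminology server and drive the paper's automated interface generation engine instead.

# Minimal sketch (assumed, not the paper's implementation): match rows of
# tabular clinical evidence to SNOMED CT concepts and assemble them into a
# small knowledge graph using networkx.
import networkx as nx

# Hypothetical local lookup table mapping evidence terms to SNOMED CT
# concept codes and preferred labels; codes here are illustrative examples.
SNOMED_CT = {
    "hypertension": ("38341003", "Hypertensive disorder"),
    "diabetes mellitus": ("73211009", "Diabetes mellitus"),
    "myocardial infarction": ("22298006", "Myocardial infarction"),
}

def build_evidence_graph(patient_id: str, evidence_rows: list[dict]) -> nx.DiGraph:
    """Link a patient node to the SNOMED CT concepts found in the evidence table."""
    graph = nx.DiGraph()
    graph.add_node(patient_id, kind="patient")
    for row in evidence_rows:
        term = row["finding"].lower()
        if term in SNOMED_CT:
            code, preferred = SNOMED_CT[term]
            graph.add_node(code, kind="concept", label=preferred)
            graph.add_edge(patient_id, code, relation="has_finding",
                           source=row.get("source", "unknown"))
    return graph

# Example: two table rows of clinical evidence for one patient.
rows = [
    {"finding": "Hypertension", "source": "intake form"},
    {"finding": "Diabetes mellitus", "source": "lab report"},
]
g = build_evidence_graph("patient-001", rows)
for u, v, attrs in g.edges(data=True):
    print(u, "--", attrs["relation"], "->", g.nodes[v]["label"])

Representing the matched evidence as a graph rather than a flat table is what allows the consistent navigation the abstract mentions: each concept node can link onward to related SNOMED CT concepts, so a clinician can traverse from a patient to findings to related conditions within one structure.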