A novel framework for artificial intelligence explainability via the Technology Acceptance Model and Rapid Estimate of Adult Literacy in Medicine using machine learning
Artificial Intelligence
Computer Science
Machine Learning
Adult Literacy
Literacy
Data Science
Knowledge Management
Psychology
Education
Authors
Dimitrios P. Panagoulias, Maria Virvou, George A. Tsihrintzis
The rapid proliferation of AI-empowered systems and machine learning (ML) across many domains underscores the need for comprehensive, customised explainability frameworks that lead to usable and trustworthy systems. In the medical domain in particular, where validation of methodologies and outcomes is as important as the adoption rate of such systems, the required depth and level of abstraction of explanations are especially important and call for a systematic approach to their definition. Explainability and interpretability are key usability and trustworthiness properties of AI-empowered systems and, as such, are important factors for technology acceptance. In this paper, we propose a novel framework for explainability requirements in AI-empowered systems based on the Technology Acceptance Model (TAM). The framework employs targeted ML (hierarchical clustering, k-means or other methods) to acquire a user model for personalised, multi-layered explainability, and integrates a rule-based system that guides the degree of trustworthiness to be achieved based on user perception and AI literacy level. We test the methodology on AI-empowered medical systems to (1) assess and quantify doctors' abilities and familiarity with technology and AI, (2) generate layers of personalised explainability based on user ability and user needs in terms of trustworthiness, and (3) provide the necessary environment for transparency and validation. To assess and quantify doctors' abilities, we consider the Rapid Estimate of Adult Literacy in Medicine (REALM), a tool commonly used in the medical domain to bridge the communication gap between patients and doctors.
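As a rough illustration of how targeted ML and a rule base could interact in such a framework, the sketch below clusters hypothetical TAM/REALM-style questionnaire scores with k-means and maps each resulting user group to an explanation depth. All feature names, thresholds and layer labels here are illustrative assumptions, not the authors' actual instrument, rule base or results.

```python
# Illustrative sketch only: cluster hypothetical TAM/REALM questionnaire scores
# with k-means and map each cluster to an explainability layer via simple rules.
# Feature names, thresholds and layer labels are assumptions, not the paper's values.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-doctor scores: [TAM perceived usefulness, TAM perceived ease
# of use, REALM-style literacy], each normalised to the range 0..1.
scores = np.array([
    [0.9, 0.8, 0.9],
    [0.4, 0.5, 0.3],
    [0.7, 0.6, 0.8],
    [0.2, 0.3, 0.2],
    [0.8, 0.9, 0.7],
    [0.3, 0.4, 0.4],
])

# Acquire a coarse user model by grouping similar profiles (k chosen for illustration).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(scores)

def explainability_layer(centroid: np.ndarray) -> str:
    """Rule-based mapping from a cluster centroid to an explanation depth.
    Thresholds are placeholders standing in for the framework's rule base."""
    usefulness, ease_of_use, literacy = centroid
    if literacy >= 0.6 and ease_of_use >= 0.6:
        return "technical layer: feature attributions and model metrics"
    if literacy >= 0.4:
        return "intermediate layer: simplified visual explanations"
    return "plain-language layer: narrative explanations with examples"

# Report which users fall into each group and the explanation depth assigned to it.
for cluster_id, centroid in enumerate(kmeans.cluster_centers_):
    members = np.where(labels == cluster_id)[0].tolist()
    print(f"cluster {cluster_id}: users {members} -> {explainability_layer(centroid)}")
```

In a full system, the cluster assignment would act as the user model and the rule base would select which explanation layer (and degree of detail) to present to each doctor; hierarchical clustering or another grouping method could be substituted for k-means without changing the overall flow.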