Onboarding
Explainability
Computer science
Process (computing)
Artificial intelligence
Bridge (graph theory)
Black box
Scalability
Knowledge management
Machine learning
Data science
Management
Medicine
Database
Internal medicine
Economics
Operating system
Authors
Devottam Gaurav, Sanju Tiwari
Identifier
DOI: 10.1109/iccosite57641.2023.10127717
Abstract
To understand the complex nature of an Artificial Intelligence (AI) model, the model needs to be trustworthy, transparent, scalable, understandable, and explainable. Trust in an AI model rests on the decisions it makes inside its black-box environment. Explainable AI (XAI) therefore helps developers understand how a model behaves when it makes a particular decision. As AI models grow more complex, scientists find their outcomes harder to interpret, so XAI is needed to explain a model's decision-making process. To build trustworthy AI models, organizations also embed ethical principles in their AI processes. In this paper, we study a case from the banking sector in which an inefficient onboarding process fails to establish a customer relationship: banks lose users' trust, which widens the gap between bank and customer and further hampers onboarding. To bridge this gap, we explain the decision-making process of the AI model through XAI.
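The abstract describes, but does not specify, how the AI model's decisions are explained. Below is a minimal sketch of one common XAI pattern for this setting: attributing a single onboarding decision to individual input features. Everything here is an illustrative assumption, not the authors' method: the synthetic data, the feature names (days_to_verify, docs_rejected, age_years, initial_deposit), the random-forest model, and the simple occlusion-style attribution (a crude stand-in for tools such as SHAP or LIME).

```python
# Hedged sketch: per-decision feature attribution for a hypothetical
# bank-onboarding classifier. All names and data are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["days_to_verify", "docs_rejected", "age_years", "initial_deposit"]

# Hypothetical training data: label 1 = customer completed onboarding,
# label 0 = customer dropped out of the process.
X = rng.normal(size=(500, 4))
y = (X[:, 0] - 0.7 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def occlusion_attribution(model, X_ref, x):
    """Score each feature by how much the predicted completion probability
    drops or rises when that feature is replaced by its dataset mean."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = []
    for j in range(x.size):
        x_masked = x.copy()
        x_masked[j] = X_ref[:, j].mean()
        scores.append(base - model.predict_proba(x_masked.reshape(1, -1))[0, 1])
    return base, scores

# Explain the model's decision for one applicant.
applicant = X[0]
base, scores = occlusion_attribution(model, X, applicant)
print(f"P(completes onboarding) = {base:.2f}")
for name, s in sorted(zip(feature_names, scores), key=lambda t: -abs(t[1])):
    print(f"{name:>15}: {s:+.3f}")  # positive = feature pushed the prediction up
```

Ranking features by the magnitude of these scores gives a per-customer explanation of the kind the abstract argues is needed to restore trust in the onboarding pipeline; a production system would typically swap the occlusion step for a principled attribution method such as SHAP.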