Interpretability
Computer science
Predictive analytics
Process (computing)
Artificial intelligence
Machine learning
Event (particle physics)
Construct (python library)
Predictive modelling
Feature (linguistics)
Analytics
Black box
Business process
Mechanism (biology)
Data mining
Data science
Work in process
Engineering
Linguistics
Operations management
Programming language
Philosophy
Physics
Operating system
Epistemology
Quantum mechanics
Authors
Bemali Wickramanayake,Zhipeng He,Chun Ouyang,Catarina Moreira,Yue Xu,Renuka Sindhgatta
Identifier
DOI:10.1016/j.knosys.2022.108773
Abstract
Predictive process analytics, often underpinned by deep learning techniques, is a newly emerged discipline dedicated to providing business process intelligence in modern organisations. Whilst accuracy has been the dominant criterion in building predictive capabilities, the use of deep learning techniques comes at the cost of the resulting models acting as 'black boxes', i.e., they are unable to provide insights into why a certain business process prediction was made. So far, little attention has been paid to interpretability in the design of deep learning-based process predictive models. In this paper, we address the 'black-box' problem in the context of predictive process analytics by developing attention-based models capable of informing both what a process prediction is and why it was made. We propose i) two types of attention: event attention, to capture the impact of specific events on a prediction, and attribute attention, to reveal which attribute(s) of an event influenced the prediction; and ii) two attention mechanisms: a shared attention mechanism and a specialised attention mechanism, reflecting the design decision of whether to construct attribute attention on individual input features (specialised) or on the concatenated feature tensor of all input feature vectors (shared). These lead to two distinct attention-based models, both of which are interpretable models that incorporate interpretability directly into the structure of a process predictive model. We conduct an experimental evaluation of the proposed models using a real-life dataset, together with a comparative analysis of the models' accuracy and interpretability, and draw insights from the evaluation and analysis results.
The results demonstrate that i) the proposed attention-based models can achieve reasonably high accuracy; ii) both are capable of providing relevant interpretations (when validated against domain knowledge); and iii) whilst the two models perform equally well in terms of prediction accuracy, the specialised attention-based model tends to provide more relevant interpretations than the shared attention-based model, reflecting the fact that the specialised attention-based model is designed to facilitate better interpretability.
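The distinction between the two attention mechanisms can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the weight vectors (`w_shared`, `w_spec`) and tensor shapes below are hypothetical, chosen only to show how a shared mechanism scores the concatenated feature tensor of all attributes while a specialised mechanism keeps a separate scoring vector per attribute.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Toy trace: T events, each with A attributes embedded in d dimensions.
T, A, d = 5, 3, 4
features = rng.normal(size=(T, A, d))  # per-event, per-attribute embeddings

# Shared mechanism: one weight vector scores the concatenated feature
# tensor of all attributes, yielding a single attention weight per event.
w_shared = rng.normal(size=(A * d,))              # hypothetical learned weights
shared_scores = features.reshape(T, A * d) @ w_shared
event_attention = softmax(shared_scores)          # distribution over events

# Specialised mechanism: a separate weight vector per attribute, so each
# event gets a distribution over its attributes (attribute attention).
w_spec = rng.normal(size=(A, d))                  # hypothetical learned weights
spec_scores = np.einsum('tad,ad->ta', features, w_spec)
attribute_attention = softmax(spec_scores, axis=1)

# Attention weights are valid probability distributions.
assert np.isclose(event_attention.sum(), 1.0)
assert np.allclose(attribute_attention.sum(axis=1), 1.0)
```

In a trained model these weights would be learned end-to-end; the specialised variant exposes per-attribute relevance directly, which is why it can support finer-grained interpretations than the shared variant.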