Keywords
Interpretability, Computer science, Machine learning, Artificial intelligence, Artificial neural networks, Feature (machine learning), Time series, Air quality index, Data mining, Meteorology
Authors
María Vega García, Jose L. Aznarte
Identifier
DOI: 10.1016/j.ecoinf.2019.101039
Abstract
In this paper, we address the problem of the interpretability of a machine learning model designed to predict air quality time series. When constructing a forecasting model, in addition to obtaining good accuracy, it is utterly important to understand why each prediction is made. Usually, interpreting the output of machine learning models is considered to be very difficult due to their complex “black box” architecture. However, we show how Shapley additive explanations can be used to interpret the outputs of a deep neural network designed to predict Nitrogen dioxide concentrations in Madrid. This method computes an estimation of the contribution of each feature for a particular prediction. Furthermore, we compare three explanatory methods to determine which one is more suitable for the air quality data and for the chosen machine learning model. A deeper insight into how the model behaves when predicting the pollution time series is obtained.
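Although this page shows only the abstract, the technique it describes, estimating each feature's contribution to a single prediction with Shapley additive explanations (SHAP), can be illustrated with a brief sketch. The model architecture, feature names, and data below are illustrative assumptions, not the authors' setup; only the general use of the `shap` library's model-agnostic `KernelExplainer` reflects the method named in the abstract.

```python
# A minimal sketch (not the authors' code): per-feature contributions
# to one NO2 forecast via SHAP. All names and data here are hypothetical.
import numpy as np
import shap
from tensorflow import keras

# Hypothetical inputs: lagged NO2 concentrations plus meteorological covariates.
feature_names = ["NO2_t-1", "NO2_t-24", "wind_speed", "temperature", "solar_radiation"]
n_features = len(feature_names)

# Stand-in deep feed-forward regressor; the paper's exact architecture is not shown here.
model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Synthetic placeholder data; in practice this would be the Madrid air quality series.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, n_features))
y_train = rng.normal(size=(500,))
model.fit(X_train, y_train, epochs=2, verbose=0)

# KernelExplainer is model-agnostic: it approximates Shapley values by
# perturbing features against a background sample of the training data.
background = shap.sample(X_train, 50)
explainer = shap.KernelExplainer(lambda x: model.predict(x, verbose=0).flatten(), background)

# Estimated contribution of each feature to one particular prediction.
x_single = X_train[:1]
shap_values = explainer.shap_values(x_single)
for name, value in zip(feature_names, np.ravel(shap_values)):
    print(f"{name}: {value:+.3f}")
```

For deep networks, `shap.DeepExplainer` offers a faster, gradient-based approximation than the perturbation-based `KernelExplainer` used above; which estimator best suits the air quality data is precisely the kind of comparison the abstract describes.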