Artificial intelligence
Machine learning
Computer science
Boosting (machine learning)
Interpretation (philosophy)
Gradient boosting
Spatial ecology
Random forest
Biology
Ecology
Programming language
Identification
DOI:10.1016/j.compenvurbsys.2022.101845
Abstract
Machine learning and artificial intelligence (ML/AI), previously considered black box approaches, are becoming more interpretable, as a result of the recent advances in eXplainable AI (XAI). In particular, local interpretation methods such as SHAP (SHapley Additive exPlanations) offer the opportunity to flexibly model, interpret and visualise complex geographical phenomena and processes. In this paper, we use SHAP to interpret XGBoost (eXtreme Gradient Boosting) as an example to demonstrate how to extract spatial effects from machine learning models. We conduct simulation experiments that compare SHAP-explained XGBoost to Spatial Lag Model (SLM) and Multi-scale Geographically Weighted Regression (MGWR) at the parameter level. Results show that XGBoost estimates similar spatial effects as those in SLM and MGWR models. An empirical example of Chicago ride-hailing modelling is presented to demonstrate the utility of SHAP with real datasets. Examples and evidence in this paper suggest that locally interpreted machine learning models are good alternatives to spatial statistical models and perform better when complex spatial and non-spatial effects (e.g. non-linearities, interactions) co-exist and are unknown.
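To illustrate the workflow the abstract describes, here is a minimal Python sketch (not the authors' code): an XGBoost regressor is fit on location coordinates plus a non-spatial covariate, and the SHAP values of the coordinate features are summed per observation to approximate the location-specific effect on the prediction. The synthetic data, feature names, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of SHAP-based extraction of spatial effects from XGBoost.
# Synthetic data and all settings below are illustrative, not from the paper.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
n = 2000

# Features: two spatial coordinates and one non-spatial covariate z.
X = np.column_stack([
    rng.uniform(0, 10, n),   # x coordinate
    rng.uniform(0, 10, n),   # y coordinate
    rng.normal(size=n),      # non-spatial covariate z
])
# Outcome with a smooth spatial trend plus a linear effect of z.
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 2.0 * X[:, 2] + rng.normal(scale=0.1, size=n)

model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape (n, 3)

# Summing the SHAP values of the coordinate features per observation gives
# an estimate of the local (spatial) contribution to each prediction,
# which can then be mapped or compared with SLM/MGWR coefficients.
spatial_effect = shap_values[:, 0] + shap_values[:, 1]
print(spatial_effect[:5])
```

Including the coordinates directly as features is one simple way to let the tree ensemble capture spatial heterogeneity; the SHAP decomposition then separates that spatial component from the covariate effects without assuming a particular functional form.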