Chemistry
Interpretation (philosophy)
Artificial neural network
Support vector machine
Random forest
Quantitative structure–activity relationship
Character (mathematics)
Black box
Artificial intelligence
Machine learning
Mathematics
Computer science
Geometry
Programming language
Authors
Raquel Rodríguez-Pérez, Jürgen Bajorath
Identifier
DOI: 10.1021/acs.jmedchem.9b01101
Abstract
In qualitative or quantitative studies of structure-activity relationships (SARs), machine learning (ML) models are trained to recognize structural patterns that differentiate between active and inactive compounds. Understanding model decisions is challenging but of critical importance to guide compound design. Moreover, the interpretation of ML results provides an additional level of model validation based on expert knowledge. A number of complex ML approaches, especially deep learning (DL) architectures, have distinctive black-box character. Herein, a locally interpretable explanatory method termed Shapley additive explanations (SHAP) is introduced for rationalizing activity predictions of any ML algorithm, regardless of its complexity. Models resulting from random forest (RF), nonlinear support vector machine (SVM), and deep neural network (DNN) learning are interpreted, and structural patterns determining the predicted probability of activity are identified and mapped onto test compounds. The results indicate that SHAP has high potential for rationalizing predictions of complex ML models.
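As a rough illustration of the kind of analysis the abstract describes (not the authors' code), the following minimal Python sketch trains a random forest on synthetic binary "fingerprint" features and uses the open-source `shap` package to attribute the predicted probability of activity to individual bits. TreeExplainer is used here for the tree ensemble; `shap.KernelExplainer` would apply analogously to SVM or DNN models. All data, feature indices, and parameters are illustrative assumptions.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for compound fingerprints: 200 "compounds" x 64 bits.
X = rng.integers(0, 2, size=(200, 64)).astype(float)
# Hypothetical activity label driven by three "structural pattern" bits.
y = ((X[:, 3] + X[:, 17] + X[:, 42]) >= 2).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles; by the SHAP
# additivity property, the per-bit contributions plus the expected value
# sum to the model's predicted probability of activity for each compound.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# Older shap releases return a list (one array per class); newer ones a
# (n_samples, n_features, n_classes) array. Extract the "active" class.
for i in range(5):
    contrib = (shap_values[1][i] if isinstance(shap_values, list)
               else shap_values[i, :, 1])
    top = np.argsort(-np.abs(contrib))[:3]
    print(f"compound {i}: top bits {top.tolist()}, "
          f"SHAP contributions {np.round(contrib[top], 3).tolist()}")
```

In the paper's setting, the bits with the largest positive contributions would correspond to structural patterns that can be mapped back onto the test compound to rationalize the prediction.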