Artificial neural network
Nexus (standard)
Artificial intelligence
Computer science
Machine learning
Feature (linguistics)
Linguistics
Philosophy
Embedded systems
Authors
Moritz Walter,Samuel J. Webb,Valerie J. Gillet
Identifiers
DOI:10.1021/acs.jcim.4c00127
Abstract
Neural network models have become a popular machine-learning technique for predicting the toxicity of chemicals. However, owing to their complex structure, it is difficult to understand the predictions these models make, which limits confidence in them. Current techniques for tackling this problem, such as SHAP or integrated gradients, provide insight by attributing importance to the input features of individual compounds. While these methods have produced promising results in some cases, they do not shed light on how representations of compounds are transformed in hidden layers, which is where neural networks do their learning. We present a novel technique for interpreting neural networks that identifies chemical substructures in the training data responsible for the activation of hidden neurons. For an individual test compound, the importance of each hidden neuron is determined, and the associated substructures are used to explain the model's prediction. Using structural alerts for mutagenicity from the Derek Nexus expert system as ground truth, we demonstrate the validity of the approach and show that the resulting explanations are competitive with, and complementary to, explanations obtained from an established feature-attribution method.
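The abstract outlines a two-step procedure: first, hidden neurons are linked to chemical substructures in the training data that drive their activation; second, a test compound's prediction is explained via its most important hidden neurons and their associated substructures. Below is a minimal, illustrative sketch of that idea in Python with NumPy. It is not the authors' implementation: the toy one-hidden-layer network, the top-k activation heuristic, the enrichment threshold, and the activation-times-output-weight importance score are all assumptions made for illustration, and random binary vectors stand in for real substructure fingerprints.

```python
# Illustrative sketch (NOT the published method) of explaining a neural
# network prediction via hidden-neuron activations and the substructures
# (here, binary fingerprint bits) associated with each neuron.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained one-hidden-layer network on binary
# substructure fingerprints (n_bits inputs -> n_hidden units -> 1 output).
n_train, n_bits, n_hidden = 200, 64, 16
X_train = (rng.random((n_train, n_bits)) < 0.2).astype(float)
W1 = rng.normal(size=(n_bits, n_hidden))   # input -> hidden weights
b1 = rng.normal(size=n_hidden)
w2 = rng.normal(size=n_hidden)             # hidden -> output weights

def hidden(X):
    """ReLU activations of the hidden layer."""
    return np.maximum(X @ W1 + b1, 0.0)

# Step 1: for each hidden neuron, take the training compounds that
# activate it most strongly and record which fingerprint bits are
# enriched among them, relative to the whole training set. The top-k
# cutoff and the 0.3 enrichment margin are arbitrary heuristics.
H_train = hidden(X_train)
base_freq = X_train.mean(axis=0)
top_k, neuron_substructures = 20, {}
for j in range(n_hidden):
    strongest = np.argsort(H_train[:, j])[-top_k:]
    bit_freq = X_train[strongest].mean(axis=0)
    enriched = np.where(bit_freq > base_freq + 0.3)[0]
    neuron_substructures[j] = enriched

# Step 2: explain a test compound via its most important hidden neurons,
# scoring each neuron by activation * output weight (one simple choice).
x_test = (rng.random(n_bits) < 0.2).astype(float)
importance = hidden(x_test[None, :])[0] * w2
for j in np.argsort(-np.abs(importance))[:3]:
    print(f"neuron {j}: importance {importance[j]:+.2f}, "
          f"substructure bits {neuron_substructures[j].tolist()}")
```

Running the sketch prints, for one test compound, its three most important hidden neurons together with the fingerprint bits linked to each, mirroring the kind of substructure-level explanation described in the abstract; in the actual method those bits would map back to concrete chemical substructures.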