Keywords
Interpretability, Computer science, Machine learning, Artificial intelligence, Throughput, Feature (linguistics), Perspective (graphics), Process (computing), Post hoc, Identification (biology), Operating system, Philosophy, Biology, Telecommunications, Dentistry, Wireless, Medicine, Botany, Linguistics
Authors
Noushin Omidvar, Hemanth Somarajan Pillai, Shih-Han Wang, Tianyou Mou, Siwen Wang, Andy Athawale, Luke E. K. Achenie, Hongliang Xin
Identifier
DOI:10.1021/acs.jpclett.1c03291
Abstract
Understanding the nature of chemical bonding and its variation in strength across physically tunable factors is important for the development of novel catalytic materials. One way to speed up this process is to employ machine learning (ML) algorithms with online data repositories curated from high-throughput experiments or quantum-chemical simulations. Despite the reasonable predictive performance of ML models for predicting reactivity properties of solid surfaces, the ever-growing complexity of modern algorithms, e.g., deep learning, makes them black boxes with little to no explanation. In this Perspective, we discuss recent advances of interpretable ML for opening up these black boxes from the standpoints of feature engineering, algorithm development, and post hoc analysis. We underline the pivotal role of interpretability as the foundation of next-generation ML algorithms and emerging AI platforms for driving discoveries across scientific disciplines.
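The abstract highlights post hoc analysis as one route to opening up black-box models. A minimal, hypothetical sketch of one such technique, permutation feature importance, is shown below. The synthetic data, the stand-in `model` function, and the descriptor names are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's method): post hoc analysis via
# permutation importance on a hypothetical surrogate model mapping two
# descriptors (a d-band-like feature x0 and a noise feature x1) to a
# target "adsorption energy".
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)  # target depends only on x0

def model(X):
    # Stand-in for a trained black-box regressor: here, the true relation.
    return 2.0 * X[:, 0]

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

baseline = mse(y, model(X))

importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-target link
    importances.append(mse(y, model(Xp)) - baseline)

# Permuting the informative feature x0 inflates the error far more than
# permuting the unused feature x1, exposing what the model relies on.
print(importances[0] > importances[1])  # True
```

The idea generalizes directly: the same permute-and-score loop works for any trained model, which is why permutation importance is a common model-agnostic post hoc diagnostic.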