Computer science
Encoder
Visible
Graphics
Artificial intelligence
Machine learning
Theoretical computer science
Physics
Quantum mechanics
Operating systems
Authors
Janghoon Ock, Chakradhar Guntuboina, Amir Barati Farimani
Identifier
DOI:10.1021/acscatal.3c04956
Abstract
Efficient catalyst screening necessitates predictive models for adsorption energy, a key descriptor of reactivity. Prevailing methods, notably graph neural networks (GNNs), demand precise atomic coordinates to construct graph representations, while the integration of observable attributes remains challenging. This research introduces CatBERTa, an energy prediction Transformer model that uses textual inputs. Built on a Transformer encoder pretrained for language modeling, CatBERTa processes human-interpretable text that incorporates target features. Attention score analysis reveals CatBERTa's focus on tokens related to adsorbates, bulk composition, and their interacting atoms. Moreover, interacting atoms emerge as effective descriptors for adsorption configurations, while factors such as the bond length and atomic properties of these atoms offer limited predictive contributions. In predicting adsorption energy from textual representations of initial structures, CatBERTa achieves a precision comparable to that of conventional GNNs. Notably, in subsets recognized for their high accuracy with GNNs, CatBERTa consistently achieves a mean absolute error of 0.35 eV. Furthermore, subtracting pairs of CatBERTa-predicted energies cancels a substantial portion of their systematic errors, reducing error by as much as 19.3% for chemically similar systems and surpassing the error reduction observed in GNNs. This outcome highlights CatBERTa's potential to improve the accuracy of energy-difference predictions. This research establishes a framework for text-based catalyst property prediction without relying on graph representations, while also unveiling intricate feature–property relationships.
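The abstract does not include an implementation, but a minimal sketch of the general approach it describes (a pretrained Transformer encoder fine-tuned with a regression head on human-readable structure text) could look like the following. The `roberta-base` backbone, the example text format, and the L1 training objective are illustrative assumptions, not details confirmed by the abstract.

```python
# Minimal sketch, assuming a roberta-base backbone and a linear regression head on the
# pooled <s> token; the actual CatBERTa text format and training setup are in the paper.
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizerFast

class TextEnergyRegressor(nn.Module):
    """Transformer encoder + linear head predicting adsorption energy (eV) from text."""
    def __init__(self, backbone: str = "roberta-base"):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained(backbone)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # embedding of the leading <s> token as a pooled summary
        return self.head(cls).squeeze(-1)   # predicted adsorption energy in eV

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = TextEnergyRegressor()

# Hypothetical human-readable description of an adsorption configuration.
text = "adsorbate: *OH; bulk: Pt3Ni (111); interacting atoms: Pt, Pt, Ni"
batch = tokenizer(text, return_tensors="pt", truncation=True)
energy = model(batch["input_ids"], batch["attention_mask"])
loss = nn.functional.l1_loss(energy, torch.tensor([-0.42]))  # L1 matches the MAE metric reported
```

The error-cancellation result follows the same logic as any paired comparison: if two chemically similar systems carry correlated biases, so E_pred = E_true + b for each, then ΔE_pred = ΔE_true + (b_A − b_B) ≈ ΔE_true when b_A ≈ b_B, which is why subtracting predictions can reduce systematic error even when absolute errors remain.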