Interpretability
Computer science
Generative model
Task (project management)
Generative grammar
Artificial intelligence
Coding (set theory)
Language model
Natural language processing
Encoding
Machine learning
Series (stratigraphy)
Set (abstract data type)
Paleontology
Economics
Chemistry
Management
Programming language
Gene
Biology
Biochemistry
Authors
Gayane Chilingaryan, Hovhannes Tamoyan, Ani Tevosyan, Nelly Babayan, Karen Hambardzumyan, Zaven Navoyan, Armen Aghajanyan, Hrant Khachatrian, Lusine Khondkaryan
Identifier
DOI: 10.1021/acs.jcim.4c00512
Abstract
We discover a robust self-supervised strategy tailored toward molecular representations for generative masked language models through a series of tailored, in-depth ablations. Using this pretraining strategy, we train BARTSmiles, a BART-like model with an order of magnitude more compute than previous self-supervised molecular representations. In-depth evaluations show that BARTSmiles consistently outperforms other self-supervised representations across classification, regression, and generation tasks, setting a new state-of-the-art on eight tasks. We then show that when applied to the molecular domain, the BART objective learns representations that implicitly encode our downstream tasks of interest. For example, by selecting seven neurons from a frozen BARTSmiles, we can obtain a model having performance within two percentage points of the full fine-tuned model on task Clintox. Lastly, we show that standard attribution interpretability methods, when applied to BARTSmiles, highlight certain substructures that chemists use to explain specific properties of molecules. The code and pretrained model are publicly available.
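The probing result mentioned in the abstract (a handful of neurons from a frozen BARTSmiles nearly matching full fine-tuning) can be illustrated with a short sketch. The snippet below is not the authors' pipeline: it uses a small, randomly initialized BART encoder from Hugging Face transformers as a stand-in for the released BARTSmiles weights, a naive character-level SMILES encoding, arbitrary neuron indices, and made-up labels, purely to show the mechanics of selecting frozen activations and fitting a linear probe on them.

```python
import torch
from transformers import BartConfig, BartModel
from sklearn.linear_model import LogisticRegression

# Tiny randomly initialized BART encoder as a stand-in for the released
# BARTSmiles checkpoint (which would be loaded here instead in practice).
config = BartConfig(
    vocab_size=64, d_model=128,
    encoder_layers=2, decoder_layers=2,
    encoder_attention_heads=4, decoder_attention_heads=4,
)
model = BartModel(config).eval()  # frozen: no weight updates below

def encode(smiles: str) -> torch.Tensor:
    # Naive character-level "tokenizer" for illustration only.
    return torch.tensor([[ord(c) % 64 for c in smiles]])

@torch.no_grad()
def frozen_features(smiles: str) -> torch.Tensor:
    out = model.encoder(input_ids=encode(smiles))
    # Mean-pool token activations into one vector per molecule.
    return out.last_hidden_state.mean(dim=1).squeeze(0)

# Toy SMILES strings with made-up binary labels (not a real dataset).
smiles_list = ["CCO", "c1ccccc1", "CC(=O)O", "CCN", "C1CCCCC1", "O=C=O"]
labels = [0, 1, 0, 0, 1, 0]

# Keep only a handful of neurons (indices arbitrary here) and fit a linear probe.
neuron_idx = [3, 17, 42, 56, 71, 90, 105]
X = torch.stack([frozen_features(s)[neuron_idx] for s in smiles_list]).numpy()
probe = LogisticRegression().fit(X, labels)
print("probe train accuracy:", probe.score(X, labels))
```

With the actual pretrained checkpoint and a proper SMILES tokenizer in place of the stand-ins above, the same recipe (frozen encoder, a few selected neurons, a linear probe) corresponds to the kind of analysis the abstract reports for ClinTox.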