Computer science
Autoencoder
Molecular graph
Artificial intelligence
Graph
Feature learning
Machine learning
Transformer
Drug discovery
Representation
Deep learning
Pattern recognition
Theoretical computer science
Chemistry
Physics
Political science
Law
Voltage
Biochemistry
Politics
Quantum mechanics
Authors
Zhen Wang, Zhenghe Feng, Yanjun Li, Bowen Li, Yongrui Wang, Sha Chen, Min He, Xin Li
Abstract
Although substantial efforts have been made using graph neural networks (GNNs) for artificial intelligence (AI)-driven drug discovery, effective molecular representation learning remains an open challenge, especially when labeled molecules are scarce. Recent studies suggest that large GNN models pre-trained by self-supervised learning on unlabeled datasets transfer better to downstream molecular property prediction tasks. However, these approaches require multiple complex self-supervised tasks and large-scale datasets, which are time-consuming, computationally expensive and difficult to pre-train end-to-end. Here, we design a simple yet effective self-supervised strategy to simultaneously learn local and global information about molecules, and further propose a novel bi-branch masked graph transformer autoencoder (BatmanNet) to learn molecular representations. BatmanNet features two tailored, complementary and asymmetric graph autoencoders that reconstruct the missing nodes and edges, respectively, from a masked molecular graph. With this design, BatmanNet can effectively capture the underlying structural and semantic information of molecules, thus improving the quality of the learned molecular representations. BatmanNet achieves state-of-the-art results on 13 benchmark datasets across multiple drug discovery tasks, including molecular property prediction, drug–drug interaction and drug–target interaction, demonstrating its potential and superiority in molecular representation learning.
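The abstract only describes the pretraining idea at a high level. The following is a minimal PyTorch sketch of the general bi-branch masked graph autoencoding scheme it outlines (mask part of a molecular graph, then reconstruct the missing node and edge attributes with two separate branches); it is not the authors' implementation, and the class name, feature dimensions, mask ratio and simple MLP encoders/decoders are all assumptions for illustration.

```python
# Illustrative sketch of bi-branch masked graph autoencoding (assumed details,
# not BatmanNet's actual architecture): mask a fraction of node and edge
# features, encode the visible context, and reconstruct the masked entries
# with two lightweight, branch-specific decoders.
import torch
import torch.nn as nn

class BiBranchMaskedGraphAE(nn.Module):
    def __init__(self, node_dim=32, edge_dim=16, hidden=64, mask_ratio=0.6):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.node_encoder = nn.Sequential(nn.Linear(node_dim, hidden), nn.ReLU())
        self.edge_encoder = nn.Sequential(nn.Linear(edge_dim, hidden), nn.ReLU())
        # Asymmetric design: decoders are much smaller than the encoders.
        self.node_decoder = nn.Linear(hidden, node_dim)
        self.edge_decoder = nn.Linear(hidden, edge_dim)

    def forward(self, node_feats, edge_feats):
        # node_feats: (num_nodes, node_dim); edge_feats: (num_edges, edge_dim)
        node_mask = torch.rand(node_feats.size(0)) < self.mask_ratio
        edge_mask = torch.rand(edge_feats.size(0)) < self.mask_ratio
        # Zero out masked entries so the encoders only see the visible context.
        visible_nodes = node_feats.masked_fill(node_mask.unsqueeze(-1), 0.0)
        visible_edges = edge_feats.masked_fill(edge_mask.unsqueeze(-1), 0.0)
        h_nodes = self.node_encoder(visible_nodes)
        h_edges = self.edge_encoder(visible_edges)
        # Each branch is trained to reconstruct its own masked attributes.
        node_loss = nn.functional.mse_loss(
            self.node_decoder(h_nodes)[node_mask], node_feats[node_mask])
        edge_loss = nn.functional.mse_loss(
            self.edge_decoder(h_edges)[edge_mask], edge_feats[edge_mask])
        return node_loss + edge_loss

# Toy usage on a random "molecular graph" with 10 atoms and 20 bonds.
model = BiBranchMaskedGraphAE()
loss = model(torch.randn(10, 32), torch.randn(20, 16))
loss.backward()
```

In the paper the encoders are graph transformers operating on the molecular graph rather than per-node MLPs, and masking is applied to graph structure as well as attributes; the sketch above is only meant to show how two complementary reconstruction objectives (nodes vs. edges) can be combined into a single self-supervised loss.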