Topics: Equivariant map, Transformer, Computer science, Architecture, Graph, Theoretical computer science, Artificial intelligence, Topology (circuits), Mathematics, Pure mathematics, Engineering, Combinatorics, Electrical engineering, Voltage, Art, Visual arts
Authors
Yi-Lun Liao, Tess Smidt
Source
Journal: Cornell University - arXiv
Date: 2022-06-23
Identifier
DOI: 10.48550/arxiv.2206.11990
Abstract
Despite their widespread success in various domains, Transformer networks have yet to perform well across datasets in the domain of 3D atomistic graphs such as molecules, even when 3D-related inductive biases like translational invariance and rotational equivariance are considered. In this paper, we demonstrate that Transformers can generalize well to 3D atomistic graphs and present Equiformer, a graph neural network leveraging the strength of Transformer architectures and incorporating SE(3)/E(3)-equivariant features based on irreducible representations (irreps). First, we propose a simple and effective architecture that replaces the original operations in Transformers with their equivariant counterparts and includes tensor products. Using equivariant operations enables encoding equivariant information in the channels of irreps features without complicating graph structures. With minimal modifications to Transformers, this architecture already achieves strong empirical results. Second, we propose a novel attention mechanism called equivariant graph attention, which improves upon typical attention in Transformers by replacing dot product attention with multi-layer perceptron attention and including non-linear message passing. With these two innovations, Equiformer achieves results competitive with previous models on the QM9, MD17, and OC20 datasets.
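As a concrete illustration of the second innovation, below is a minimal sketch (not the authors' implementation) contrasting standard dot-product attention weights with the MLP attention and non-linear message passing described in the abstract. Equiformer actually operates on SE(3)/E(3)-equivariant irreps features combined via tensor products; here plain scalar channels stand in for them, and the module name `MLPGraphAttention` and all layer sizes are illustrative assumptions.

```python
# Minimal sketch of MLP attention with non-linear message passing on a graph.
# Scalar channels stand in for Equiformer's irreps features; this only
# illustrates how the attention weights and messages are computed by MLPs.
import torch
import torch.nn as nn

class MLPGraphAttention(nn.Module):  # hypothetical name, not from the paper
    def __init__(self, dim: int, hidden: int = 32):
        super().__init__()
        # MLP attention: attention logits come from a small MLP applied to
        # concatenated destination/source node features, instead of a
        # dot product between query and key vectors.
        self.attn_mlp = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.SiLU(), nn.Linear(hidden, 1)
        )
        # Non-linear message passing: messages pass through an MLP rather
        # than being a single linear projection of the source features.
        self.msg_mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.SiLU(), nn.Linear(dim, dim)
        )

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        src, dst = edge_index                               # edges j -> i, each [E]
        pair = torch.cat([x[dst], x[src]], dim=-1)          # [E, 2*dim]
        logits = self.attn_mlp(pair).squeeze(-1)            # [E]
        # Softmax over the incoming edges of each destination node.
        alpha = torch.zeros_like(logits)
        for i in dst.unique():
            mask = dst == i
            alpha[mask] = torch.softmax(logits[mask], dim=0)
        msg = self.msg_mlp(pair) * alpha.unsqueeze(-1)      # [E, dim]
        out = torch.zeros_like(x)
        out.index_add_(0, dst, msg)                         # aggregate per node
        return out

x = torch.randn(4, 16)                                      # 4 nodes, 16 channels
edge_index = torch.tensor([[0, 1, 2, 3], [1, 1, 3, 3]])     # j -> i
print(MLPGraphAttention(16)(x, edge_index).shape)           # torch.Size([4, 16])
```

The design point the abstract highlights is that both the attention weights and the messages are produced by small MLPs, rather than by a dot product and a linear projection as in standard Transformer attention.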