Computer science
Artificial intelligence
Inference
Dimensionality reduction
Encoder
Deep learning
Machine learning
Natural language processing
Operating systems
Authors
Ahmed Elnaggar, Michael Heinzinger, Christian Dallago, Ghalia Rihawi, Yu Wang, Llion Jones, Tom Gibbs, Tamás Fehér, Christoph Angerer, Martin Steinegger, Debsindhu Bhowmik, Burkhard Rost
Source
Journal: Cornell University - arXiv
Date: 2020-01-01
Citations: 189
Identifier
DOI: 10.48550/arxiv.2007.06225
Abstract
Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from NLP. These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive models (Transformer-XL, XLNet) and four auto-encoder models (BERT, Albert, Electra, T5) on data from UniRef and BFD containing up to 393 billion amino acids. The LMs were trained on the Summit supercomputer using 5616 GPUs and a TPU Pod with up to 1024 cores. Dimensionality reduction revealed that the raw protein LM-embeddings from unlabeled data captured some biophysical features of protein sequences. We validated the advantage of using the embeddings as exclusive input for several subsequent tasks. The first was a per-residue prediction of protein secondary structure (3-state accuracy: Q3=81%-87%); the second comprised per-protein predictions of protein sub-cellular localization (ten-state accuracy: Q10=81%) and of membrane vs. water-soluble proteins (2-state accuracy: Q2=91%). For the per-residue predictions, the transfer of the most informative embeddings (ProtT5) for the first time outperformed the state-of-the-art without using evolutionary information, thereby bypassing expensive database searches. Taken together, the results implied that protein LMs learned some of the grammar of the language of life. To facilitate future work, we released our models at https://github.com/agemagician/ProtTrans.
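The workflow the abstract describes, feeding a raw amino-acid sequence through a pre-trained protein LM, taking the per-residue hidden states as embeddings, and pooling them into a per-protein vector for the per-protein tasks, can be sketched with the released checkpoints. The snippet below is a minimal, hedged example, not the authors' exact pipeline: it assumes the ProtT5-XL encoder is available on the Hugging Face Hub under the name Rostlab/prot_t5_xl_uniref50, that the usual ProtTrans preprocessing applies (residues separated by spaces, rare amino acids mapped to X), and uses a made-up example sequence.

import re
import torch
from transformers import T5EncoderModel, T5Tokenizer

# Assumption: this is the name of the released ProtT5-XL encoder checkpoint;
# any other ProtTrans checkpoint would be used the same way.
checkpoint = "Rostlab/prot_t5_xl_uniref50"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = T5Tokenizer.from_pretrained(checkpoint, do_lower_case=False)
model = T5EncoderModel.from_pretrained(checkpoint).to(device).eval()

# Hypothetical example sequence; ProtTrans preprocessing maps rare amino acids
# (U, Z, O, B) to X and inserts spaces so each residue becomes one token.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
prepared = " ".join(re.sub(r"[UZOB]", "X", sequence))

batch = tokenizer(prepared, return_tensors="pt").to(device)
with torch.no_grad():
    hidden = model(**batch).last_hidden_state      # shape: (1, seq_len + 1, 1024)

# Per-residue embeddings: drop the trailing special token added by the T5 tokenizer.
per_residue = hidden[0, : len(sequence)]           # shape: (len(sequence), 1024)
# Per-protein embedding: mean-pool the per-residue vectors.
per_protein = per_residue.mean(dim=0)              # shape: (1024,)

print(per_residue.shape, per_protein.shape)

In this sketch the per-residue matrix is the kind of input a secondary-structure predictor would consume, while the pooled vector corresponds to the per-protein representation used for localization and membrane vs. water-soluble classification; swapping the checkpoint name would switch to one of the other released models (e.g. ProtBert or ProtXLNet).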