Interpretability
Computer science
Artificial intelligence
Transformer
Generative grammar
Function (biology)
Field (mathematics)
Engineering
Biology
Mathematics
Voltage
Evolutionary biology
Pure mathematics
Electrical engineering
Authors
Noelia Ferruz, Birte Höcker
Identifier
DOI:10.1038/s42256-022-00499-z
Abstract
The twenty-first century is presenting humankind with unprecedented environmental and medical challenges. The ability to design novel proteins tailored for specific purposes would potentially transform our ability to respond to these issues in a timely manner. Recent advances in the field of artificial intelligence are now setting the stage to make this goal achievable. Protein sequences are inherently similar to natural languages: amino acids arrange in a multitude of combinations to form structures that carry function, the same way as letters form words and sentences carry meaning. Accordingly, it is not surprising that, throughout the history of natural language processing (NLP), many of its techniques have been applied to protein research problems. In the past few years we have witnessed revolutionary breakthroughs in the field of NLP. The implementation of transformer pre-trained models has enabled text generation with human-like capabilities, including texts with specific properties such as style or subject. Motivated by its considerable success in NLP tasks, we expect dedicated transformers to dominate custom protein sequence generation in the near future. Fine-tuning pre-trained models on protein families will enable the extension of their repertoires with novel sequences that could be highly divergent but still potentially functional. The combination of control tags such as cellular compartment or function will further enable the controllable design of novel protein functions. Moreover, recent model interpretability methods will allow us to open the ‘black box’ and thus enhance our understanding of folding principles. Early initiatives show the enormous potential of generative language models to design functional sequences. We believe that using generative text models to create novel proteins is a promising and largely unexplored field, and we discuss its foreseeable impact on protein design. 
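The abstract's idea of steering generation with control tags (such as cellular compartment or function) can be sketched with a self-contained toy example. Everything below is illustrative, not the authors' implementation: the tags, the amino-acid biases, and the weighted sampling stand in for what a conditional transformer would learn from data.

```python
import random

# The 20 standard amino acids, one letter each -- the "alphabet" of the
# protein "language" described in the abstract.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# Hypothetical per-tag residue biases. In a real conditional language model
# these preferences would be learned during training, not hand-coded.
TAG_BIAS = {
    "<membrane>": "AILMFVW",   # hydrophobic residues favoured
    "<cytosol>":  "DEKRNQST",  # polar/charged residues favoured
}

def generate(tag, length, seed=0):
    """Sample a toy sequence conditioned on a control tag."""
    rng = random.Random(seed)
    favoured = TAG_BIAS[tag]
    seq = []
    for _ in range(length):
        # Residues favoured under this tag get 3x sampling weight,
        # mimicking how a control token shifts the model's distribution.
        weights = [3 if aa in favoured else 1 for aa in AMINO_ACIDS]
        seq.append(rng.choices(AMINO_ACIDS, weights=weights)[0])
    return "".join(seq)

print(generate("<membrane>", 30))
```

The design point is that the same sampler produces different sequence statistics depending only on the tag, which is the essence of controllable generation; a trained transformer replaces the hand-written bias table with learned conditional probabilities.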
Both proteins and natural language are essentially based on a sequential code, but feature complex interactions at multiple scales, which can be useful when transferring machine learning models from one domain to another. In this Review, Ferruz and Höcker summarize recent advances in language models, such as transformers, and their application to protein design.