Keywords
Genomics
Computational biology
Computer science
Genome
DNA sequencing
Transformer
Context
Prioritization
Biology
Machine learning
Genetics
DNA
Gene
Engineering
Authors
Hugo Dalla-Torre, Liam Gonzalez, Javier Mendoza-Revilla, Nicolás López Carranza, Adam Grzywaczewski, Francesco Oteri, Christian Dallago, Evan Trop, Bernardo P. de Almeida, Hassan Sirelkhatim, Guillaume Richard, Marcin J. Skwark, Karim Beguir, M. Lopez, Thomas Pierrot
Identifier
DOI: 10.1038/s41592-024-02523-z
Abstract
The prediction of molecular phenotypes from DNA sequences remains a longstanding challenge in genomics, often driven by limited annotated data and the inability to transfer learnings between tasks. Here, we present an extensive study of foundation models pre-trained on DNA sequences, named Nucleotide Transformer, ranging from 50 million up to 2.5 billion parameters and integrating information from 3,202 human genomes and 850 genomes from diverse species. These transformer models yield context-specific representations of nucleotide sequences, which allow for accurate predictions even in low-data settings. We show that the developed models can be fine-tuned at low cost to solve a variety of genomics applications. Despite no supervision, the models learned to focus attention on key genomic elements and can be used to improve the prioritization of genetic variants. The training and application of foundational models in genomics provides a widely applicable approach for accurate molecular phenotype prediction from DNA sequence. Nucleotide Transformer is a series of genomics foundation models of different parameter sizes and training datasets that can be applied to various downstream tasks by fine-tuning.
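The abstract notes that these models yield context-specific representations of nucleotide sequences. The Nucleotide Transformer models tokenize DNA into non-overlapping 6-mers before embedding. The sketch below illustrates that tokenization step in plain Python; the vocabulary layout, special tokens, and remainder handling are simplified assumptions for illustration, not the authors' exact implementation:

```python
# Illustrative sketch (simplified assumption, not the paper's exact code):
# split a DNA string into non-overlapping 6-mer tokens, as used by
# Nucleotide Transformer, with a toy vocabulary of all 4^6 = 4096 6-mers.
from itertools import product

KMER = 6
# Toy vocabulary: two special tokens followed by every possible 6-mer.
VOCAB = {"<cls>": 0, "<pad>": 1}
for i, kmer in enumerate(product("ACGT", repeat=KMER), start=len(VOCAB)):
    VOCAB["".join(kmer)] = i

def tokenize(seq: str) -> list[str]:
    """Split a DNA string into non-overlapping 6-mer tokens."""
    seq = seq.upper()
    cut = len(seq) - len(seq) % KMER
    tokens = [seq[i:i + KMER] for i in range(0, cut, KMER)]
    # A trailing remainder shorter than 6 nt is kept as single-nucleotide
    # tokens (a simplifying assumption for this sketch).
    tokens += list(seq[cut:])
    return tokens

tokens = tokenize("ACGTACGTACGTAC")
# -> ['ACGTAC', 'GTACGT', 'A', 'C']
```

Non-overlapping k-mer tokenization keeps sequence length manageable (a 6 kb input becomes roughly 1,000 tokens), which is what lets a standard transformer context window cover biologically meaningful stretches of the genome.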