An order of magnitude faster! Meta launches a 15-billion-parameter protein model to crush AlphaFold2

The largest protein language model to date has been released!

A year ago, DeepMind's open-sourced AlphaFold2 landed in Nature and Science, taking both the biology and AI communities by storm.

A year later, Meta answered with ESMFold, which is an order of magnitude faster.

And it is not just fast: the model also has 15 billion parameters.

LeCun tweeted his praise, calling it a great new result from the Meta-FAIR protein team.

Co-author Zeming Lin revealed that the 3-billion-parameter language model was trained for 3 weeks on 256 GPUs, while ESMFold itself took 10 days on 128 GPUs. As for the 15-billion-parameter version, details are still unclear.

He also said that the code will definitely be open sourced later, so stay tuned!

Big and fast!

Today, our protagonist is ESMFold, a model that predicts high-accuracy, atomic-level structure end to end, directly from a single protein sequence.

Paper address: https://www.biorxiv.org/content/10.1101/2022.07.20.500902v1

The benefits of 15 billion parameters go without saying: today's large models can be trained to predict the three-dimensional structure of proteins with atomic-level accuracy.

In terms of accuracy, ESMFold is similar to AlphaFold2 and RoseTTAFold.

However, ESMFold’s inference speed is an order of magnitude faster than AlphaFold2!

An order-of-magnitude claim can be hard to picture in the abstract; the comparison in the figure below makes it concrete.

[Figure: inference speed comparison of ESMFold, AlphaFold2, and RoseTTAFold]

What’s the difference?

Although AlphaFold2 and RoseTTAFold have achieved breakthrough success on the problem of atomic-resolution structure prediction, both rely on multiple sequence alignments (MSAs) and templates of similar protein structures for optimal performance.

In contrast, by leveraging the internal representations of the language model, ESMFold can generate a structure prediction from just a single sequence as input, greatly speeding up structure prediction.
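
To make the single-sequence workflow concrete, here is a minimal sketch of structure prediction with Meta's fair-esm package. The checkpoint name and method follow the package's published examples; treat them as assumptions if your version differs.

```python
import torch
import esm  # pip install "fair-esm[esmfold]"

# Load the released ESMFold model; it folds one sequence with no MSA search.
model = esm.pretrained.esmfold_v1()
model = model.eval()  # append .cuda() if a GPU is available

sequence = "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"
with torch.no_grad():
    pdb_string = model.infer_pdb(sequence)  # atomic coordinates in PDB format

with open("prediction.pdb", "w") as f:
    f.write(pdb_string)
```

The entire call takes a raw amino-acid string; there is no alignment or template stage to configure.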

The researchers found that ESMFold's predictions on low-perplexity sequences were comparable to those of current state-of-the-art models.

Moreover, structure-prediction accuracy tracks the language model's perplexity closely: the better the language model understands a sequence, the better it captures the structure.

Currently, there are billions of protein sequences of unknown structure and function, many of which are derived from metagenomic sequencing.

Using ESMFold, researchers can fold a random sample of 1 million metagenomic sequences in just 6 hours; that works out to a sustained throughput of roughly 46 structures per second.

A large proportion of these predictions have high confidence and are unlike any known structure, with no matches in existing databases.

Researchers believe ESMFold can help make sense of protein structures that lie beyond our current knowledge.

Additionally, because ESMFold's predictions are an order of magnitude faster than existing models', researchers can use it to help close the gap between rapidly growing protein sequence databases and the slow-growing databases of protein structure and function.

15 billion parameter protein language model

Next, let's look at Meta's new ESMFold in detail.

ESM-2 is a Transformer-based language model that uses attention to learn the interaction patterns between pairs of amino acids in the input sequence.

Compared with the previous-generation model ESM-1b, Meta improved the model architecture and training parameters and scaled up compute and data. The addition of relative positional embeddings also enables the model to generalize to sequences of arbitrary length.
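
As an illustration of how such a model is queried, here is a minimal sketch that loads a pretrained ESM-2 checkpoint through the fair-esm package and extracts per-residue representations. The checkpoint name and call signatures follow the package's public examples, so treat them as assumptions if versions differ.

```python
import torch
import esm  # pip install fair-esm

# Load a pretrained ESM-2 checkpoint (33 layers, 650M parameters).
model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()

data = [("protein1", "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG")]
_, _, batch_tokens = batch_converter(data)

with torch.no_grad():
    # Attention over all residue pairs yields contextual per-residue embeddings
    # and, as a by-product, predicted residue-residue contacts.
    results = model(batch_tokens, repr_layers=[33], return_contacts=True)

per_residue_repr = results["representations"][33]  # shape: [1, seq_len + 2, 1280]
contact_map = results["contacts"]                  # predicted contact probabilities
```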

The results show that an ESM-2 model with 150 million parameters performs better than an ESM-1b model with 650 million parameters.

ESM-2 also surpasses other protein language models on structure-prediction benchmarks, a gain consistent with established scaling patterns in large language modeling.

As ESM-2 scales up, its language-modeling accuracy improves substantially.

End-to-end single sequence structure prediction

A key difference between ESMFold and AlphaFold2 is that ESMFold uses language-model representations, which eliminates the need for explicit homologous sequences (in the form of an MSA) as input.

ESMFold simplifies AlphaFold2's Evoformer by replacing the computationally expensive network module that processes the MSA with a Transformer module that processes a single sequence. This simplification makes ESMFold significantly faster than MSA-based models.

The output of the folding trunk is then processed by a structure module, which is responsible for producing the final atomic-level structure and prediction confidences.
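
Putting the three stages together, the pipeline described above can be sketched schematically as follows. All class and attribute names here are hypothetical placeholders for exposition, not Meta's actual implementation.

```python
import torch.nn as nn

class ESMFoldSketch(nn.Module):
    """Schematic of the ESMFold pipeline: language model -> folding trunk -> structure module."""

    def __init__(self, language_model, trunk, structure_module):
        super().__init__()
        self.language_model = language_model      # frozen ESM-2 encoder
        self.trunk = trunk                        # simplified Evoformer, no MSA track
        self.structure_module = structure_module  # emits coordinates and confidence

    def forward(self, tokens):
        # 1. A single sequence goes in; internal LM representations stand in for the MSA.
        lm_repr = self.language_model(tokens)          # [batch, length, dim]
        # 2. The folding trunk refines per-residue and pairwise representations.
        single, pair = self.trunk(lm_repr)
        # 3. The structure module outputs atomic coordinates plus per-residue confidence.
        coords, confidence = self.structure_module(single, pair)
        return coords, confidence
```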

Researchers compared ESMFold with AlphaFold2 and RoseTTAFold on the CAMEO (April 2022 to June 2022) and CASP14 (May 2020) test sets.

When only a single sequence is given as input, ESMFold performs much better than AlphaFold2.

With the complete pipeline, AlphaFold2 scored 88.3 on CAMEO and 84.7 on CASP14. On CAMEO, ESMFold achieves accuracy comparable to RoseTTAFold, with an average TM-score of 82.0.
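
For context, TM-score measures structural similarity on a 0-to-1 scale (the figures above appear to be scaled by 100), and scores above roughly 0.5 usually indicate the same overall fold. Below is a minimal sketch of the standard formula; it assumes the optimal superposition of predicted and true structures has already been computed.

```python
import numpy as np

def tm_score(distances, l_target):
    """TM-score from per-residue-pair distances (angstroms) under an optimal
    superposition. The real score maximizes over superpositions; this sketch
    assumes that maximization has already been done. Valid for l_target > 15."""
    d0 = 1.24 * (l_target - 15) ** (1.0 / 3.0) - 1.8  # length-dependent scale
    d = np.asarray(distances, dtype=float)
    return float(np.sum(1.0 / (1.0 + (d / d0) ** 2)) / l_target)
```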

Conclusion

The researchers found that an unsupervised language model, trained on a large and evolutionarily diverse database of protein sequences, can predict protein structures at atomic-level resolution.

By scaling the language model up to 15 billion parameters, the effect of scale on protein structure learning can be studied systematically.

Protein structure-prediction accuracy improves as a nonlinear function of model size, and there is a strong connection between how well the language model understands a sequence and the quality of its structure prediction.

The models of the ESM-2 series are the largest protein language models trained to date, with only an order of magnitude fewer parameters than the largest recently developed text models.

Moreover, ESM-2 is a big improvement over the previous generation: even at 150 million parameters, it captures structure more accurately than the 650-million-parameter ESM-1b.

Researchers said the biggest driver of ESMFold's performance is the language model. Because there is a strong connection between a language model's perplexity and structure-prediction accuracy, when ESM-2 understands a protein sequence better, ESMFold can produce predictions comparable to current state-of-the-art models.
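
To illustrate that link, one common way to measure how well a masked language model "understands" a sequence is pseudo-perplexity: mask each position in turn and average the model's negative log-likelihood of the true residue. Here is a rough sketch building on the fair-esm usage shown earlier; the details are illustrative assumptions, not the paper's exact protocol.

```python
import math
import torch

def pseudo_perplexity(model, alphabet, sequence):
    """Lower is better: the model assigns higher probability to the true residues."""
    batch_converter = alphabet.get_batch_converter()
    _, _, tokens = batch_converter([("seq", sequence)])
    n = tokens.size(1) - 2  # skip the BOS/EOS special tokens
    total_nll = 0.0
    for i in range(1, n + 1):
        masked = tokens.clone()
        masked[0, i] = alphabet.mask_idx  # hide one residue at a time
        with torch.no_grad():
            logits = model(masked)["logits"]
        log_probs = torch.log_softmax(logits[0, i], dim=-1)
        total_nll -= log_probs[tokens[0, i]].item()
    return math.exp(total_nll / n)
```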

ESMFold achieves accurate atomic-resolution structure prediction with inference an order of magnitude faster than AlphaFold2.

In practice, the speed advantage is even larger, because ESMFold does not need to search for evolutionarily related sequences to construct an MSA.

Faster search methods can reduce that search time, but it remains substantial no matter how much it is cut.

The benefit of a greatly shortened inference time is self-evident: the speedup makes it possible to map the structural space of large metagenomic sequence databases.

Alongside structure-based tools for identifying remote homology and conservation, rapid and accurate structure prediction with ESMFold can play an important role in the structural and functional analysis of large collections of new sequences.

Obtaining millions of predicted structures in a limited time will help reveal new insights into the breadth and diversity of natural proteins and enable the discovery of entirely new protein structures and functions.

About the author

The co-author of this article is Zeming Lin from Meta AI.

According to his personal homepage, Zeming is a PhD student at New York University and a visiting research engineer at Meta AI, mainly responsible for back-end infrastructure.

He earned both his bachelor's and master's degrees at the University of Virginia, where he worked with Yanjun Qi on machine-learning applications, especially in protein structure prediction.

His areas of interest are deep learning, structure prediction, and bioinformatics.
