
With reference to the human brain, will learning to forget make large AI models better?

王林
Release: 2024-03-12 14:43:02

Recently, a team of computer scientists developed a more flexible and resilient machine learning model that can periodically forget information it has learned, a capability that existing large language models lack.

Testing shows that in many cases the "forgetting method" makes training very efficient, and forgetting models often perform better. Jea Kwon, an AI engineer at the Institute for Basic Science in Korea, said the new research represents significant progress in the AI field.

The "forgetting method" training efficiency is very high

Most of today's mainstream AI language engines rely on artificial neural networks. Each "neuron" in such a network is actually a mathematical function; the neurons connect to one another, receive and transmit information, and process and learn from data through the combined operations of many layers. This design lets AI roughly mimic how the human brain works and thereby exhibit human-like intelligent behavior.
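To make that idea concrete, here is a minimal NumPy sketch (an illustration, not taken from the research) of "neurons" as mathematical functions: each one weights its inputs, adds a bias, and applies a nonlinearity, and stacking layers of them lets information flow through the network.

```python
import numpy as np

# Illustrative sketch only: a "neuron" is just a weighted sum plus a nonlinearity.
def neuron(inputs, weights, bias):
    return np.tanh(np.dot(inputs, weights) + bias)

# A layer is many such neurons applied to the same inputs at once.
def layer(inputs, weight_matrix, biases):
    return np.tanh(inputs @ weight_matrix + biases)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                                        # a toy input vector
single = neuron(x, rng.normal(size=4), 0.0)                   # one neuron's output
hidden = layer(x, rng.normal(size=(4, 8)), np.zeros(8))       # first layer
output = layer(hidden, rng.normal(size=(8, 2)), np.zeros(2))  # second layer
print(single, output)  # information has flowed through two layers of "neurons"
```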

At the start, the flow of information through the network is more or less random; as the network is fitted to the training data, the information flowing between neurons is gradually optimized. For example, a researcher who wants to train a bilingual translation model first collects a massive amount of bilingual text and uses it to train the model, which adjusts the connections between neurons so that text in one language is matched to the equivalent text in the other, linking the words that correspond to each other.

Such training requires a great deal of computing resources, and if the model performs poorly or user needs change, it may no longer meet those needs.

Researcher Mikel Artetxe pointed out: "Suppose you have a model that covers 100 languages, but the language you need is not among them. If you want to add that language to the model, you have to retrain it."

A few years ago, Artetxe and his colleagues trained a neural network on one language and then erased the word-composition information it had learned, known as "tokens". Tokens are stored in the first layer of the neural network, called the "embedding layer"; the other layers were left untouched. After erasing the first language's tokens and training on a second language, new tokens of the second language could fill the embedding layer.
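As a rough illustration of this procedure, the sketch below (hypothetical model, sizes, and initialization; not the authors' code) defines a tiny PyTorch model with an embedding layer and deeper layers, then "forgets" by re-initializing only the embedding weights before retraining on a second language.

```python
import torch
import torch.nn as nn

# Hypothetical toy model: an embedding layer (the "first layer") plus deeper
# layers that hold more abstract information.
class TinyLM(nn.Module):
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, dim)   # language-specific tokens
        self.body = nn.Sequential(                       # deeper, more abstract layers
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, vocab_size),
        )

    def forward(self, token_ids):
        return self.body(self.embedding(token_ids))

model = TinyLM()
# ... suppose the model has been trained on language 1 here ...

# "Forget": re-initialize only the embedding layer; the body keeps what it learned.
nn.init.normal_(model.embedding.weight, std=0.02)

# Continuing training on language-2 data then fills the embedding layer with the
# new language's tokens while reusing the abstract knowledge stored in the body.
```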

Although the model then contains a large amount of mismatched information, it can still be retrained on the second language, meaning it can learn and process that language. The researchers believe that while the embedding layer stores vocabulary-specific information, the deeper layers of the network store more abstract information about the concepts underlying human language, and it is these concepts that help the model learn the second language.

Yihong Chen, an author of the research report, put it this way: "We live in the same world and use words in different languages to express the same concepts. So the same kind of reasoning appears in the model: an apple is something sweet and delicious, and it represents more than just one word."

Using the "forgetting method" to add new languages to a trained model is efficient, but it still requires retraining on massive amounts of data with powerful processing hardware. Is there a better way? Instead of erasing the embedding layer and retraining afterwards, the researchers proposed periodically resetting the embedding layer during the model's initial training.
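A minimal sketch of what such periodic resetting might look like in a pretraining loop is shown below. The model is assumed to expose its embedding layer as `model.embedding` (as in the earlier sketch), and the reset interval is an arbitrary choice, not a value from the paper.

```python
import torch
import torch.nn as nn

# Sketch of "periodic forgetting" during pretraining: train normally, but reset
# the embedding layer at a fixed interval so the deeper layers learn to cope
# with being re-paired with fresh embeddings. Names and interval are assumptions.
def pretrain_with_forgetting(model, data_loader, optimizer, reset_every=1000):
    loss_fn = nn.CrossEntropyLoss()
    for step, (token_ids, targets) in enumerate(data_loader):
        logits = model(token_ids)
        loss = loss_fn(logits.view(-1, logits.size(-1)), targets.view(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Periodic "forgetting": wipe only the embedding layer, keep deeper layers.
        if step > 0 and step % reset_every == 0:
            nn.init.normal_(model.embedding.weight, std=0.02)
```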

Artetxe said: "In this way, the entire model gets used to being reset. If you later want to extend the model to another language, the process becomes much easier."

Forgetting models perform better

The researchers experimented with RoBERTa, a fairly general-purpose large language model, training one copy with the periodic forgetting technique and comparing it with a copy trained with the standard, non-forgetting method. On the first language, the forgetting model scored 85.1 and the standard model scored 86.1. When retrained on a second language using only about 5 million tokens (versus roughly 70 billion used for the first language), the forgetting model's accuracy dropped to 62.7, while the standard model's dropped to 53.3.

When the researchers imposed computational constraints on retraining, the forgetting models performed even better. For example, when training was shortened from 125,000 steps to 5,000 steps, the forgetting model averaged about 57.8 points, while the standard model dropped to 37.2, close to random guessing.

The researchers therefore concluded that forgetting models perform better at learning languages.

Evgenii Nikishin, a researcher at Mila, the deep learning research institute in Quebec, said: "Because the model is constantly forgetting and then relearning during training, it becomes easier to teach the network something new later." There are indications that such models understand language at a deeper level, rather than just grasping the meanings of individual words.

The forgetting method is somewhat similar to how the human brain operates. Benjamin Levy, a neuroscientist at the University of San Francisco, believes: "Human memory is quite imprecise at storing large amounts of detailed information, but the brain remembers the key points of an experience, retains abstract information, and is good at inference. Letting AI process information the way humans do, for example by giving it the ability to forget, may make AI more flexible."

Yihong Chen believes that factories manufacturing language models may appear in the future. Such factories will need forgetting technology: a base model that can be quickly adapted to new fields. (Knife)
