Translator | Cui Hao
Reviewer | Sun Shujuan
Artificial Intelligence (AI) may be in its early stages of development, but it has the potential to revolutionize the way humans interact with technology.
When it comes to artificial intelligence, there are currently two main views. Some believe that AI will eventually surpass human intelligence, while others believe that AI will always serve humanity. There's one thing both sides can agree on: Artificial intelligence is developing at an ever-increasing pace.
A simple, general description is that artificial intelligence is the process of programming a computer to make decisions on its own. This can be achieved in a variety of ways, but most commonly through the use of algorithms. An algorithm is a set of rules or instructions that can be followed to solve a problem. In the case of artificial intelligence, algorithms are used to teach computers how to make decisions.
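As a minimal illustration (not from the original article), here is what such a rule-based decision algorithm might look like in Python; the loan scenario and the thresholds are invented for the example.

```python
# A made-up, rule-based decision algorithm: a fixed set of instructions
# the computer follows to reach a decision on its own.

def approve_loan(income: float, debt: float) -> bool:
    """Approve a loan only if debt stays below 40% of income."""
    if income <= 0:
        return False
    return debt / income < 0.40

print(approve_loan(income=50_000, debt=10_000))  # True  (20% debt ratio)
print(approve_loan(income=50_000, debt=30_000))  # False (60% debt ratio)
```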
In the past, artificial intelligence was mainly used for simple tasks, such as playing chess or solving math problems. Now, artificial intelligence is being used for more complex tasks such as facial recognition, natural language processing, and even autonomous driving. As artificial intelligence continues to develop, we don't know what capabilities it will have in the future. As AI capabilities rapidly expand, it's important to understand what it is, how it works, and its potential impact.
The potential benefits of artificial intelligence are huge. With the ability to make decisions on its own, AI can make countless industries more efficient and provide opportunities for all kinds of people. In this article, we will talk about GPT-3.
GPT-3 was created by OpenAI, a pioneering AI research company based in San Francisco. They define their goal as "ensuring that artificial intelligence benefits all of humanity." Their vision for artificial intelligence is clear: an AI that is not limited to specialized tasks, but can perform a wide variety of tasks, as humans do.
A few months ago, OpenAI opened its new language model, GPT-3, to all users. GPT-3 stands for Generative Pre-trained Transformer 3, and it can generate text from a premise called a prompt. Simply put, it has high-level "auto-complete" capabilities: you only need to provide two or three sentences on a given topic, and GPT-3 will do the rest. You can also hold conversations with it, and the answers GPT-3 gives will be based on the context of the previous questions and answers.
It should be emphasized that each answer GPT-3 provides is only one possibility, not the only possible answer. If you run the same premise several times, it may give a different or even contradictory answer. It is a model that returns a response based on what has been said so far, connecting that to everything it knows in order to produce the most plausible answer. This means it is under no obligation to answer with real data, which is something we must take into account. Users can still supply relevant working data, but GPT-3 will weigh that data against the contextual information: the more comprehensive the context, the more reasonable the answer, and vice versa.
OpenAI's GPT-3 language model is pre-trained, and that training consisted of studying vast amounts of information from the Internet. GPT-3 was fed publicly available books, the entire content of Wikipedia, and millions of web pages and scientific papers. In short, it incorporates the most important human knowledge we have published on the web throughout history.
After reading and analyzing this information, the language model built its connections into a 700 GB model spread across 48 GPUs with 16 GB each. To put this in perspective, OpenAI's previous model, GPT-2, was 40 GB and analyzed 45 million web pages. The difference is huge: GPT-2 has 1.5 billion parameters, while GPT-3 has 175 billion.
Let's do a test, shall we? I asked GPT-3 how to define itself, and the result is as follows:
The only thing we have to do to use and test GPT-3 is go to the OpenAI website, register, and fill in our personal information. During the process you will be asked what you will use the AI for; for these examples, I chose the "Personal Use" option.
I would like to point out that, in my experience, it works better in an English context. That doesn't mean it doesn't work well in other languages; in fact, it does very well in Spanish, but I prefer the results it gives in English, which is why the tests and results from here on are shown in English.
GPT-3 gives us a free gift when we sign up. Once you register with your email and phone number, you get $18 to use completely free, with no need to enter a payment method. It may not seem like much, but $18 actually goes a long way: I spent five hours testing the AI and it only cost me about $1. I will explain the prices later so we can understand this better.
Once we are on the website, we have to go to the Playground section. This is where all the magic happens.
First of all, the most eye-catching thing on the page is the big text box. This is where we start entering prompts for the AI (remember, these are our requests and/or instructions). It's as simple as typing something, in this case a question, and clicking the Submit button below so that GPT-3 answers us and writes what we've asked for.
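Everything the Playground does can also be done through OpenAI's API. Below is a minimal sketch using the openai Python client; the model name and the prompt are placeholders I chose, not something specified in the article.

```python
# Sketch: sending a prompt programmatically instead of through the Playground.
# Assumes the pre-1.0 `openai` Python client and a Davinci-family model.
import openai

openai.api_key = "YOUR_API_KEY"  # taken from your OpenAI account settings

response = openai.Completion.create(
    model="text-davinci-003",                   # assumed engine name
    prompt="What is artificial intelligence?",  # the question we type in the box
    max_tokens=100,
)
print(response["choices"][0]["text"])
```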
Presets are ready-made configurations that can be loaded at any time for different tasks. They can be found in the upper-right corner of the text box. Clicking "More Examples" opens a new screen with the entire list. When a preset is selected, the contents of the text area are replaced with its default text, and the settings in the right sidebar are updated as well. For example, if we want to use the "Grammar Correction" preset, we should follow the structure sketched below for best results.
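The screenshot of that structure is not reproduced here, but the idea is roughly the following; the exact wording of the preset is an approximation on my part.

```python
# Rough sketch of the "Grammar Correction" preset: an instruction line,
# a blank line, then the sentence to fix. The wording is approximate.
prompt = (
    "Correct this to standard English:\n\n"
    "She no went to the market."
)
# Submitting this prompt in the Playground (or via the API) should return
# a corrected sentence along the lines of "She did not go to the market."
```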
The large dataset used to train GPT-3 is the main reason it is so powerful. However, bigger doesn't always mean better, which is why OpenAI provides four main models. There are of course other models, but we are advised to use the latest versions, which are the ones used here.
The available models are called Davinci, Curie, Babbage, and Ada. Of the four, Davinci is the largest and most capable, as it can cover any task the other engines perform.
Below is an overview of each model and the types of tasks it is suited to. Keep in mind that while the smaller engines may not have been trained on as much data, they are still general-purpose models that are perfectly viable and convenient for certain tasks.
Davinci. As mentioned above, this is the most capable model and can do everything the other models can do, usually with fewer instructions. Davinci can solve logic problems, determine cause and effect, understand the intent of a text, produce creative content, explain character motivations, and handle complex summarization tasks.
Curie. This model tries to balance power and speed. It can do anything Ada or Babbage can do, but it can also handle more complex classification tasks and more nuanced tasks such as summarization, sentiment analysis, chatbot applications, and question answering.
Babbage. Slightly more capable than Ada, though not as efficient. It can perform all the same tasks as Ada, but can also handle slightly more complex classification tasks, making it ideal for semantic search tasks that rank how well documents match a search query.
Ada. Finally, this is usually the fastest and cheapest model. It is best suited to less nuanced tasks, such as parsing text, reformatting text, and simpler classification tasks. The more context you give Ada, the better it performs.
The model is not the only thing we can tune: there are other parameters we can adjust to get the best response to our prompts.
One of the most important settings controlling the output of the GPT-3 engine is Temperature. This setting controls the randomness of the generated text. At a value of 0 the engine is deterministic, meaning that for a given input it will always produce the same output. At a value of 1 the engine takes the greatest risks and uses the most creativity.
You may have noticed, in some of the tests you ran yourself, that GPT-3 sometimes stops in the middle of a sentence. To control the maximum amount of text it is allowed to generate, you can use the "Maximum length" setting, which is specified in tokens. We will explain what a token is later.
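As a rough illustration (not part of the original article), both settings map directly onto parameters of the openai Python client; the model name and the prompt below are assumptions of mine.

```python
# Sketch: the same prompt at temperature 0 (deterministic) and 1 (creative),
# with max_tokens capping how much text is generated.
import openai

openai.api_key = "YOUR_API_KEY"

for temp in (0.0, 1.0):
    response = openai.Completion.create(
        model="text-davinci-003",                    # assumed engine name
        prompt="Write a slogan for a coffee shop.",
        temperature=temp,  # 0 = same output every run, 1 = maximum creativity
        max_tokens=30,     # generation stops after at most 30 tokens
    )
    print(temp, response["choices"][0]["text"].strip())
```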
The "Top P" parameter can control the randomness and creativity of GPT-3 text, but in this case, with the token (word) within the probability range ) depends on where we place it (0.1 would be 10%). The OpenAI documentation recommends using only one function between Temperature and Top P, so when using one, make sure the other is set to 1.
On the other hand, we have two parameters that penalize the answers GPT-3 gives. One is the "frequency penalty", which controls the model's tendency to repeat itself: it reduces the probability of words that have already been generated, and the penalty grows with the number of times a word has appeared in the prediction.
The second is the "presence penalty". This parameter encourages the model to make new predictions: if a word has already appeared in the predicted text, it receives a penalty that reduces its probability. Unlike the frequency penalty, the presence penalty does not depend on how often the word has appeared in past predictions.
Finally, we have the "Best of" parameter, which generates several answers to a query; the Playground then picks the best one to show us. GPT-3 warns that generating several complete answers to a prompt costs more tokens.
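For completeness, here is a sketch of how these remaining settings look as API parameters; the values are arbitrary and only meant to show where each setting goes, and the model name is again an assumption.

```python
# Sketch: Top P, the two penalties, and "Best of" as parameters of a single call.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.Completion.create(
    model="text-davinci-003",   # assumed engine name
    prompt="List three ideas for a short horror story.",
    max_tokens=120,
    temperature=1,          # left at 1 because we steer randomness with top_p
    top_p=0.1,              # sample only from the top 10% of probability mass
    frequency_penalty=0.5,  # penalize tokens more the more often they repeat
    presence_penalty=0.5,   # penalize tokens that have appeared at all
    best_of=3,              # generate 3 completions server-side, keep the best
)
print(response["choices"][0]["text"])
```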
To round off this section, the third icon next to the "Submit" button displays the full history of requests we have sent to GPT-3. Here you can find the prompts behind the best-performing responses.
GPT-3 also offers a way to keep using the platform once the free $18 credit runs out, and it is not a monthly subscription or anything like that. The price is tied directly to usage; in other words, you are charged per token. In this context a token is the unit the AI bills by, and a token can be anything from a single character to a whole word. It is therefore difficult to know exactly how much each use of the AI will cost, but given that tokens usually cost fractions of a cent, a little experimentation quickly shows how much everything costs.
Although OpenAI only shows us a dozen or so examples of GPT-3 usage, we can see the tokens spent on each example and thus better understand how it works.
These are the versions and their respective prices.
To give us an idea of how much a certain number of words might cost, or of how the tokenization works, we have the following tool, called the Tokenizer.
It tells us that the GPT family of models processes text using tokens, which are common sequences of characters found in text. The models understand the statistical relationships between these tokens and excel at producing the next token in a sequence.
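A rough way to reproduce what the Tokenizer shows, and to estimate cost, is to count tokens locally with OpenAI's tiktoken library; note that the per-1K-token rate below is a placeholder I chose, not a figure from the article, and the engine name is assumed.

```python
# Sketch: count tokens locally and turn the count into a rough cost estimate.
import tiktoken

PRICE_PER_1K_TOKENS = 0.02  # placeholder USD rate; check OpenAI's pricing table

encoding = tiktoken.encoding_for_model("text-davinci-003")  # assumed engine
text = "GPT-3 charges per token, and a token can be as short as one character."
tokens = encoding.encode(text)

print(f"{len(tokens)} tokens")
print(f"estimated cost: ${len(tokens) / 1000 * PRICE_PER_1K_TOKENS:.5f}")
```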
Finally, here is a small example of how much the same prompt would cost us.
From my point of view, GPT-3 is something users must know how to use correctly. GPT-3 does not necessarily give correct data, which means that if you want to use it for work, to answer questions, or to do homework, you have to provide good context so that the answers it gives come close to the results you want.
Some people worry about whether GPT-3 will change education, or whether some of today's writing-related jobs will disappear because of it. In my humble opinion, this is going to happen: sooner or later, we will all be replaced by artificial intelligence. This example concerns AI for writing, but similar systems exist for programming, painting, audio, and more.
On the other hand, it opens up many more possibilities for all kinds of jobs and projects, both personal and professional. For example, have you ever wanted to write a horror story? That very use case appears in the same example list as the grammar corrector.
Having said all that, my point is that we are looking at early versions of artificial intelligence. These products still have a lot of growing and improving to do, but that doesn't mean they haven't already arrived. As long as we learn from and use artificial intelligence, we need to keep training it to give the best responses.
Cui Hao, 51CTO community editor and senior architect, has 18 years of software development and architecture experience and 10 years of distributed architecture experience.
Original title: GPT-3 Playground: The AI That Can Write for You, by Isaac Alvarez