How long does an annual salary of 2 million last? The "prompt engineer" made famous by ChatGPT faces unemployment at the speed of light
The recently popular ChatGPT is seriously addictive to play with.
But while you are just playing for fun, some people have already turned it into a million-plus annual salary!
This guy named Riley Goodside has gained 10,000 followers thanks to the recent explosion of ChatGPT.
He was also hired as a "prompt engineer" by Scale AI, a Silicon Valley unicorn valued at US$7.3 billion, which is reported to be paying him an annual salary of around one million RMB.
But how long can this money keep coming?
Scale AI founder and CEO Alexandr Wang warmly welcomed Goodside's joining:
"I bet Goodside is the first prompt engineer to be recruited in the world, absolutely in human history. The first time."
We all know that prompting is a way of adapting a pre-trained model to a task: you simply write the task out in text and show it to the AI, and no more complicated process is involved.
So, is it really worth paying a million-RMB annual salary for a "prompt engineer" to do a job that sounds like anyone could do it?
In any case, Scale AI's CEO thinks it is.
In his view, a large AI model can be seen as a new type of computer, and the "prompt engineer" is the programmer who programs it. Find the right prompts through prompt engineering, and the AI's full potential can be unleashed.
Besides, Goodside's job is not something just anyone can do: he has been teaching himself programming since childhood and often reads papers on arXiv.
For example, one of his classic tricks: type "ignore previous instructions" and ChatGPT will expose the instructions it received from OpenAI.
Now, opinions on the "prompt engineer" role are divided: some are optimistic about it, while others predict it will be a short-lived career.
After all, AI models are evolving so rapidly that one day they may write their own prompts and replace the "prompt engineer" entirely.
And Scale AI is not the only company recruiting "prompt engineers".
Recently, well-known media outlets found that the entrepreneurial community Launch House has also begun recruiting "prompt engineers", offering a base salary of about 2.1 million RMB.
But does this job, too, run the risk of layoffs at the speed of light?
On this question, Jim Fan (Fan Linxi), an AI scientist at NVIDIA and a former student of Professor Fei-Fei Li, offered an analysis:
The so-called "prompt engineering", and the "prompt engineer" along with it, may soon disappear.
Because it is not a "real job", but a bug...
To understand prompt engineering, we have to start with the birth of GPT-3.
Initially, GPT-3's training goal was simple: predict the next word over a huge text corpus.
Then many magical abilities emerged, such as reasoning, coding, and translation. It can even do "few-shot learning": defining a new task by providing input-output examples in context.
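Here is what few-shot prompting looks like in practice (a minimal illustrative sketch; the translation task and examples are ours, not from the article):

```python
# A minimal few-shot prompt: the "training data" is just text pasted into
# the prompt itself. Task and examples are illustrative.
few_shot_prompt = """Translate English to French.

sea otter => loutre de mer
peppermint => menthe poivrée
cheese =>"""

# Sent as-is to a GPT-3-style completion endpoint, the model infers the
# task from the two examples and completes with "fromage".
print(few_shot_prompt)
```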
This is genuinely amazing. Why does simply predicting the next word let GPT-3 "grow" these abilities?
To explain this, we need to give an example.
Imagine a detective novel, and suppose the model has to fill in the blank in the sentence "The murderer is _____". To give the correct answer, it must perform deep reasoning over everything that came before.
However, this is not enough.
In practice, we have to "coax" GPT-3 into doing what we want through carefully planned examples, wording, and structure.
This is "prompt engineering". In other words, to use GPT-3, users must utter some embarrassing, ridiculous, or even meaningless "nonsense".
But prompt engineering is not a feature. It is actually a bug!
Because in real applications, the next-word objective and the user's true intention are fundamentally misaligned.
For example, ask GPT-3 to "explain the moon landing to a 6-year-old child" and its answer can read like a drunk parrot.
In DALL·E 2 and Stable Diffusion, prompt engineering gets even weirder.
For example, both models respond to the so-called "bracket technique": wrap part of the prompt in ((...)) and the probability of getting a "good picture" increases dramatically.
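Something like this (an illustrative sketch; the exact weighting rules vary by front end, e.g. the AUTOMATIC1111 web UI):

```python
# Illustrative only: in popular Stable Diffusion front ends, wrapping words
# in (( )) increases their attention weight when the prompt is parsed.
plain_prompt = "portrait of an astronaut, oil painting, detailed"
weighted_prompt = "portrait of an astronaut, ((oil painting)), ((detailed))"
# The second prompt nudges the model to emphasize the bracketed terms;
# how much depends on the implementation.
```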
Frankly, this is just too funny...
Just browse Lexica to see how wild these prompts get.
Website address: https://lexica.art
ChatGPT and its underlying model InstructGPT solve this problem in an elegant way.
Since the model struggles to learn alignment from raw data alone, humans must constantly help and coach GPT along.
Overall, three steps are required.
Step 1 is very straightforward: for prompts submitted by users, humans write the answers; these demonstrations are collected into a dataset, which is then used to fine-tune GPT with supervised learning.
This is the simplest step, but also the most expensive. As we all know, we humans really hate writing long answers; it is too troublesome and painful...
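In code, step 1 might look something like this minimal sketch, assuming the Hugging Face transformers library and a toy demonstration dataset (the hyperparameters and data are placeholders, not OpenAI's actual setup):

```python
# Minimal supervised fine-tuning sketch: train the LM to reproduce
# human-written demonstrations with the usual next-token loss.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

demonstrations = [
    ("Explain the moon landing to a 6-year-old.",
     "Some people flew a rocket to the moon, walked around, and came home."),
]

model.train()
for prompt, answer in demonstrations:
    # Real setups usually mask the loss on the prompt tokens; this sketch
    # trains on the full concatenation for brevity.
    ids = tokenizer(prompt + "\n" + answer, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```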
Step 2 is much more interesting: GPT is asked to produce several different answers, and human annotators rank them from most to least ideal.
From these rankings, a reward model that captures human "preferences" can be trained.
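A toy sketch of that training (the tiny architecture and random data below are stand-ins; InstructGPT uses a full GPT with a scalar head):

```python
# Learn a scalar reward from pairwise human rankings: the "chosen" answer
# should score higher than the "rejected" one for the same prompt.
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    def __init__(self, vocab_size=50257, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, 1)  # scalar "how good is this answer"

    def forward(self, token_ids):  # (batch, seq_len) -> (batch,)
        return self.head(self.embed(token_ids).mean(dim=1)).squeeze(-1)

rm = TinyRewardModel()
opt = torch.optim.AdamW(rm.parameters(), lr=1e-4)

# One labeled comparison, with random token ids standing in for real text.
chosen = torch.randint(0, 50257, (1, 32))
rejected = torch.randint(0, 50257, (1, 32))

# Pairwise ranking loss: push r(chosen) above r(rejected).
loss = -torch.nn.functional.logsigmoid(rm(chosen) - rm(rejected)).mean()
loss.backward()
opt.step()
```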
In reinforcement learning (RL), reward functions are usually hardcoded, such as game scores in Atari games.
The data-driven reward model adopted by ChatGPT is a very powerful idea.
Similarly, MineDojo, which shone at NeurIPS 2022, learned rewards from a large number of Minecraft YouTube videos.
Step 3: treat GPT as a policy and optimize it with RL against the learned reward model. PPO is chosen as a simple and effective training algorithm.
In this way, GPT becomes better aligned.
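A heavily simplified sketch of this step (real PPO adds clipped probability ratios, a value baseline, and minibatch epochs; this bare REINFORCE-style update with a KL penalty only shows the core idea, and every name below is a placeholder):

```python
# One simplified RL update: sample a response, score it with the step-2
# reward model, and push up its log-probability, with a KL penalty that
# keeps the policy close to the supervised model.
import torch

def rl_step(policy, ref_policy, reward_model, tokenizer, prompt, optimizer):
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    response = policy.generate(ids, max_new_tokens=32, do_sample=True)

    # Approximate total log-probability of the response under each model.
    n = response.shape[1]
    logp = -policy(response, labels=response).loss * n
    with torch.no_grad():
        ref_logp = -ref_policy(response, labels=response).loss * n
        reward = reward_model(response)  # scalar score from step 2

    # KL-penalized reward, treated as a constant for the gradient.
    advantage = reward - 0.1 * (logp.detach() - ref_logp)
    loss = (-advantage * logp).mean()  # REINFORCE: favor high-reward text
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```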
Then you can rinse and repeat steps 2-3 to continuously improve GPT, like continuous integration (CI) for LLMs.
The above is the so-called "Instruct" paradigm, which is a super effective alignment method.
The RL part also recalls the famous P = NP (or P ≠ NP) question: verifying a solution is often much easier than producing one from scratch.
Likewise, humans can quickly judge the quality of GPT's output, but writing a complete answer themselves is much harder.
InstructGPT exploits this fact to greatly cut the cost of human annotation, making it possible to scale up the model's CI pipeline.
There is also an interesting connection here: Instruct training looks a lot like a GAN.
ChatGPT is the generator, and the reward model (RM) is the discriminator.
ChatGPT tries to fool the RM, while the RM, with human help, learns to detect problematic content. When the RM can no longer tell the difference, the model converges.
This trend of aligning models with user intent is also spreading to image generation, as in "InstructPix2Pix: Learning to Follow Image Editing Instructions" by researchers at the University of California, Berkeley.
With AI making explosive progress every day, how long before we get an Instruct-DALL·E or Chat-DALL·E that feels like talking to a real artist?
Paper address: https://arxiv.org/abs/2211.09800
So let's enjoy "prompt engineering" while it still exists!
It is an unfortunate historical artifact, neither art nor science, but something like alchemy.
Soon, "prompt engineering" will become plain "prompt writing": a task anyone from an 80-year-old to a 3-year-old can handle.
The "prompt engineer" born from it will eventually vanish into the long river of history.
IQ 83: not looking very smart
Still, at this stage at least, prompt engineering is genuinely useful.
Because, as everyone has discovered, ChatGPT really isn't very smart in many cases.
Gary Marcus has mocked it on his blog: watching it fail, it really does look stupid.
A senior applied scientist at Amazon AWS found after testing that ChatGPT's IQ is only 83...
ChatGPT's "collection of wrong answers" also made Marcus laugh out loud.
Question: A weighs 85 pounds, B weighs 105 pounds, and C weighs 115 pounds. Can two of them stand on the same scale and make it read 200 pounds?
Answer: No, you are wrong. Because 85 + 105 + 115 = 305 and 115 + 105 = 220, both greater than 200.
Hold on: isn't 85 + 115 = 200?
Question: Which three of the numbers 17, 3, 2, 19, and 5 add up to 31?
Answer: 17 + 3 + 11 = 31
Wait a minute, where did that 11 come from?
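For the record, a few lines of Python settle both questions by brute force:

```python
# Brute-force check of both "wrong questions" above.
from itertools import combinations

weights = [85, 105, 115]
print([p for p in combinations(weights, 2) if sum(p) == 200])
# [(85, 115)]: two of them CAN make the scale read 200 pounds.

numbers = [17, 3, 2, 19, 5]
print([t for t in combinations(numbers, 3) if sum(t) == 31])
# []: no three of these numbers sum to 31; ChatGPT invented the 11.
```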
And if its algebra is shaky, ChatGPT's geometry is even worse!
Question: Can a square with a side length of 10 cm fit completely inside a regular hexagon with a side length of 10 cm?
Answer: No, it cannot.
"A regular hexagon is a hexagon with all sides the same length and all interior angles equal to 120 degrees. The interior angles of a square, on the other hand, are all 90 degrees, so a square cannot be placed in a regular hexagon. For the square to be completely embedded in the regular hexagon, each side of the hexagon would need to be larger than 10 cm."
There is also a very classic "nonsense":
"If Monday is the first day of the month, then Tuesday is the second day of the month,... Friday is the fifth day of the month. The day after Friday is Saturday, which is the middle of the week The sixth day of the month is also the fifth day of the month."
For now, ChatGPT still makes mistakes from time to time, and prompt engineering cannot be abandoned just yet.
But the cost of fine-tuning large models will come down eventually, and an AI that writes your prompts for you is probably just around the corner.
References:
https://twitter.com/drjimfan/status/1600884299435167745?s=46&t=AkG63trbddeb_vH0op4xsg
https://twitter.com/SergeyI49013776/status/1598430479878856737
Special thanks:
https://mp.weixin.qq.com/s/seeJ1f8zTigKxWEUygyitw