Andrew Ng's ChatGPT class went viral: AI gave up writing words backwards, but understood the whole world

王林
Release: 2023-06-03 21:27:21

Who would have expected that ChatGPT still makes silly mistakes like this?

Andrew Ng pointed it out in his latest class:

ChatGPT cannot reverse words!

For example, ask it to reverse the word "lollipop" and it outputs "pilollol", a complete jumble.

Oh, this is indeed a bit surprising.

So much so that when a netizen who attended the class posted about it on Reddit, it immediately drew a crowd of onlookers, and the post quickly racked up 6k views.

And this is not a one-off bug. Netizens confirmed that ChatGPT really cannot complete this task, and our own testing showed the same result.

△ Tested on ChatGPT (GPT-3.5)

Other products, including Bard, Bing, and Wenxin Yiyan, fail as well.

△ Tested on Bard

△ Tested on Wenxin Yiyan

Some people chimed in to complain that ChatGPT is terrible at these simple word tasks.

For example, playing the popular word game Wordle was a disaster: it never got it right.

Eh? Why is this?

The key lies in the token

The key to this phenomenon is the token. Large models process text in units of tokens, which are common sequences of characters found in text.

A token can be a whole word or a fragment of one. Large models learn the statistical relationships between these tokens and become adept at generating the next one.

So when handling a small task like word reversal, the model may simply be flipping the tokens rather than the individual letters.

This is even more obvious in Chinese: a single character may be one token, or a whole word may be one token.

For the example at the beginning, someone tried to get ChatGPT to walk through its reasoning process.

To make this more intuitive, OpenAI has even released a GPT-3 Tokenizer.

For example, GPT-3 sees the word "lollipop" as three parts: l, oll, and ipop.
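
If you want to poke at this yourself, here is a minimal sketch using OpenAI's open-source tiktoken library (assuming it is installed via pip install tiktoken); the exact pieces you get depend on which encoding you load, and the sketch also shows why reversing token by token is not the same as reversing letter by letter.

```python
# A minimal sketch using OpenAI's tiktoken library to inspect token splits.
# Assumes `pip install tiktoken`; the exact pieces depend on the encoding chosen.
import tiktoken

enc = tiktoken.get_encoding("r50k_base")  # encoding used by GPT-3-era models

word = "lollipop"
token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]
print(pieces)  # something like ['l', 'oll', 'ipop'], depending on the encoding

# Reversing at the token level scrambles the word differently
# from reversing at the letter level.
print("token-level reversal: ", "".join(reversed(pieces)))
print("letter-level reversal:", word[::-1])  # 'popillol' -- the answer we actually want
```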

Out of accumulated experience, some unwritten rules of thumb have emerged (a rough estimator based on them is sketched after the list):

  • 1 token ≈ 4 English characters ≈ three-quarters of a word;
  • 100 tokens ≈ 75 words;
  • 1–2 sentences ≈ 30 tokens;
  • 1 paragraph ≈ 100 tokens; 1,500 words ≈ 2,048 tokens.
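
As a hedged illustration of those rules of thumb (they are rough approximations for English text, not guarantees), a back-of-the-envelope estimator might look like this:

```python
# Back-of-the-envelope token estimates based on the rules of thumb above.
# These are rough approximations for English text, not exact counts.
def estimate_tokens_from_chars(num_chars: int) -> float:
    return num_chars / 4          # 1 token ~= 4 English characters

def estimate_tokens_from_words(num_words: int) -> float:
    return num_words / 0.75       # 1 token ~= 3/4 of a word, i.e. 100 tokens ~= 75 words

print(estimate_tokens_from_words(1500))  # ~2000, close to the "1,500 words ~= 2,048 tokens" rule
print(estimate_tokens_from_chars(400))   # ~100 tokens, roughly one paragraph
```
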
How text gets split also depends on the language. Someone previously calculated that Chinese consumes 1.2 to 2.7 times as many tokens as English.

The higher the token-to-character (or token-to-word) ratio, the higher the processing cost, so tokenizing and processing Chinese is more expensive than English.
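
If you want to measure the ratio on your own text, a small sketch along these lines works (again assuming tiktoken is installed; the sample sentences below are made up for illustration, and the exact ratio will vary with the encoding and the text):

```python
# Compare how many tokens roughly the same sentence costs in English vs. Chinese.
# Assumes `pip install tiktoken`; the ratio you observe depends on encoding and text.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by newer chat models

english = "Tokens are the basic units that large language models read."
chinese = "token是大语言模型读取文本的基本单位。"

en_tokens = len(enc.encode(english))
zh_tokens = len(enc.encode(chinese))
print(en_tokens, zh_tokens, round(zh_tokens / en_tokens, 2))  # ratio is typically above 1 for Chinese
```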

Tokens can be understood as the way large models make sense of the human world: they are very simple and greatly reduce memory and time complexity.

But tokenization has a downside: it can make it difficult for the model to learn meaningful input representations, the most intuitive symptom being that it may fail to grasp the meaning of a word.

Transformers made a corresponding optimization for this: a complex, uncommon word is split into a meaningful token plus a more common one.

Just like "annoyingly" is divided into two parts: "annoying" and "ly", the former retains its own meaning, while the latter is more common.

This is part of what gives ChatGPT and other large-model products their impressive ability to understand human language so well.

As for its inability to handle a small task like word reversal, there are naturally workarounds.

The simplest and most direct one is to separate the letters of the word yourself~

Or you can have ChatGPT do it step by step: first break the word into individual letters, so each letter becomes its own token.

Or ask it to write a program that reverses the letters; the program's output then comes out correct. (dog head)
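
The program itself is trivial; a generic one-liner like the following (a sketch, not the exact code ChatGPT would produce) gets the job done:

```python
# Reversing a word letter by letter sidesteps tokenization entirely.
def reverse_word(word: str) -> str:
    return word[::-1]  # Python slice with step -1 walks the string backwards

print(reverse_word("lollipop"))  # popillol
```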

Alternatively, you can just use GPT-4: in our testing it does not have this problem.

△ Tested on GPT-4

In short, tokens are the cornerstone of AI's understanding of natural language.

As the bridge through which AI understands human language, tokens have become increasingly important.

They have become both a key determinant of model performance and the billing unit for large models.

There is even token literature

As mentioned above, tokens help the model capture finer-grained semantic information such as word meaning, word order, and grammatical structure. In sequence-modeling tasks (language modeling, machine translation, text generation, and so on), position and order are crucial to how the model is built.

Only when the model accurately understands the position and context of each token in the sequence can it correctly predict what comes next and produce reasonable output.

Therefore, both the quality and the quantity of tokens directly affect model performance.

Since the start of this year, as more and more large models have been released, token counts have been emphasized. For example, leaked details about Google's PaLM 2 mentioned that it was trained on 3.6 trillion tokens.

Many big names in the industry have also stressed that tokens really are crucial.

Andrej Karpathy, the AI scientist who moved from Tesla back to OpenAI this year, said in a talk:

More tokens can make models think better.

He also emphasized that model performance is not determined by parameter count alone.

For example, LLaMA has far fewer parameters than GPT-3 (65B vs. 175B), but because it was trained on more tokens (1.4T vs. 300B), LLaMA is the more capable model.

Beyond their direct impact on model performance, tokens also serve as the billing unit for AI models.

Take OpenAI's pricing as an example: it charges per 1K tokens, and different models and different token types carry different prices.
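
As a generic sketch of how such per-1K-token billing works (the rates and token counts below are placeholder assumptions, not OpenAI's actual prices; see the pricing page for real numbers):

```python
# Generic per-1K-token cost estimate. The price and token counts below are
# placeholder assumptions, not OpenAI's actual rates -- see openai.com/pricing.
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  prompt_price_per_1k: float, completion_price_per_1k: float) -> float:
    return ((prompt_tokens / 1000) * prompt_price_per_1k
            + (completion_tokens / 1000) * completion_price_per_1k)

# Example: 1,200 prompt tokens and 800 completion tokens at hypothetical rates.
print(round(estimate_cost(1200, 800, 0.0015, 0.002), 5))  # 0.0034, in the rates' currency
```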

In short, once you step into the field of large AI models, you will find that tokens are an unavoidable topic.

So much so that "token literature" has even emerged...

It is worth mentioning, though, that in the Chinese-speaking world it is still not settled what "token" should be translated as, or what role the word should play there.

Literal translations of "token" always feel a bit odd.

GPT-4 thinks it would be better to call it a "word element" or a "tag". What do you think?

Reference link:
[1]https://www.reddit.com/r/ChatGPT/comments/13xxehx/chatgpt_is_unable_to_reverse_words/
[2]https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them
[3]https://openai.com/pricing

Source: 51cto.com