As 2022 drew to a close, OpenAI released a chatbot called ChatGPT, which quickly took the Internet by storm.
Less than two weeks after launch, more than one million people had signed up for the online trial. Users simply type in text and instantly get back remarkably fluent articles, stories, and poems.
It writes so well that some people have used it to compose opening lines for Tinder ("Do you mind if I sit here? Watching you do hip thrusts has made my legs a little weak.").
Not only that: to the considerable shock of educators around the world, students have started using ChatGPT to write term papers, while others are using it to try to reinvent the search engine. Suddenly, the whole world was discussing the magic of ChatGPT.
Still, Marcus cautions against viewing chatbots through too flattering a filter.
While ChatGPT seems to know everything, it is also prone to error. In an interview, Marcus said that ChatGPT is no different from its predecessors: the system "is still unreliable, still doesn't understand the real world, still doesn't understand the psychological world, and is still full of bugs."
In other words, ChatGPT often makes things up; quite a lot of what it says is simply not true.
For example, with a little prompting from users, ChatGPT will claim that fried dough sticks (churros) are well suited to surgery because "they are small and allow for greater precision and control during surgery, reducing the risk of complications and improving the overall outcome of the operation."
Chatbots spew so much of this kind of nonsense that the well-known programming Q&A site Stack Overflow temporarily banned ChatGPT-generated answers.
And the mistakes keep coming. Although ChatGPT often adjusts based on user feedback, weeks after release, netizens were still being left speechless by its responses.
Similar mistakes happen so frequently that even OpenAI CEO Sam Altman has had to admit as much:
ChatGPT still has many limitations, but it is good enough at some things to create a misleading impression of greatness. It is still too early to rely on it for anything important; there remains a lot of work to do on robustness and truthfulness.
In short, although ChatGPT may sound as sci-fi as the computer in Star Trek, for now, people cannot fully trust it.
Of course, ChatGPT was AI enthusiasts' gift of 2022. What about 2023?
In 2023, what Silicon Valley and the entire world are eagerly awaiting is GPT-4.
Those who have actually tried GPT-4 have come away deeply impressed. According to some rumors, GPT-4 will be released in the spring of 2023, and when it arrives it will eclipse ChatGPT; certainly, even more people will be talking about it.
In many ways, expectations for GPT-4 are very high:
Nick Davidov, founder of the venture capital firm DVC, said the arrival of GPT-4 will bring "an economic impact similar to that of the COVID-19 pandemic," and that its rapid dissemination and adoption could "rapidly increase the productivity of hundreds of millions of knowledge workers."
Technically speaking, GPT-4 will have more parameters, more processors and memory behind it, and will be trained on more data.
GPT-1 was trained on 4.6GB of data; by GPT-3, that figure had soared to 750GB. GPT-4's training corpus will evidently be even more staggering, perhaps covering most of the entire Internet.
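For a sense of scale, here is a back-of-the-envelope calculation using only the two figures cited above (the GPT-4 number is unknown, so no value is assumed for it):

```python
# Rough comparison of the GPT training-corpus sizes cited above (in GB).
# The GPT-4 figure was unknown at the time of writing, so it is omitted.
corpus_gb = {
    "GPT-1": 4.6,
    "GPT-3": 750.0,
}

growth = corpus_gb["GPT-3"] / corpus_gb["GPT-1"]
print(f"GPT-1 -> GPT-3: training data grew roughly {growth:.0f}x")  # ~163x
```

A jump of two more orders of magnitude from GPT-3 would indeed approach "most of the Internet," which is what the scaling rumors imply.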
OpenAI knows that more training means better output. With each iteration, GPT's performance has become more and more human-like; GPT-4 may well evolve into a performance monster.
But will it solve the problems seen so far? Marcus still has his doubts.
While GPT-4 certainly looks like it will be smarter than its predecessor, there are still issues with its internal architecture.
Marcus suspects that GPT-4 will give people a sense of déjà vu: first it will sweep the Internet, and then, a few days later, many of the old problems will turn out to persist.
According to the information available so far, GPT-4 is essentially the same as GPT-3 in architecture. If so, some fundamental problems can be expected to remain unresolved: chatbots will still lack an internal model of how the world works.
Therefore, GPT-4 will not be able to understand things at an abstract level. It might be better at helping students write essays, but it still won't truly understand the world, and the machine behind the words will still show through between the lines of its answers.
So, while the AI community celebrates the coming of GPT-4, Marcus has offered 7 less positive predictions:
1. GPT-4 will still make all kinds of stupid mistakes, just like its predecessors. It may sometimes perform a given task well and sometimes fail, but you cannot predict in advance which it will be.
2. GPT-4's reasoning about physics, psychology, and mathematics will still be unreliable. It may solve some problems that stumped its predecessors, but it will remain helpless in the face of longer and more complex scenarios. For example, when asked a medical question, it will either refuse to answer or occasionally utter nonsense that sounds reasonable but is dangerous; although it has devoured a huge swath of the Internet, it is not trustworthy or complete enough to give reliable medical advice.
3. Fluent hallucinations will remain common and easy to induce. In other words, large language models will still be a tool that can easily be used to generate information that sounds plausible but is completely wrong.
4. GPT-4's natural-language output still cannot be fed to downstream programs in a reliable way. Developers who use it to build virtual assistants will find that they cannot reliably map user language to user intent (see the sketch after this list).
5. GPT-4 by itself will not be a general artificial intelligence that can solve arbitrary tasks. Without external assistance, it will neither beat Meta's Cicero at Diplomacy nor drive a car reliably, nor be as versatile as Optimus Prime in Transformers or Rosie in The Jetsons.
6. The "alignment" between what humans want and what machines do will remain a critical, unsolved problem. GPT-4 will still have no real control over its own output; some of its recommendations will be astonishingly bad, and examples of thinly masked bias will be uncovered within days or months.
7. When AGI (artificial general intelligence) is finally achieved, large language models like GPT-4 may be part of the eventual solution, but only part of it. Pure "scaling", that is, building ever-larger models until they absorb the entire Internet, will prove useful only up to a point. General artificial intelligence that is trustworthy and aligned with human values will come from more structured systems, with more built-in knowledge and explicit tools for reasoning and planning. All of these are lacking in the current GPT systems.
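Prediction 4 is the one developers will feel most directly. Below is a minimal sketch of the problem, assuming a hypothetical `ask_llm()` helper that returns the model's raw text; the function name, prompt, and JSON schema are illustrative, not any official API:

```python
import json

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to a large language model.

    A real assistant would hit an LLM API here; this canned reply
    keeps the sketch self-contained and shows the typical failure
    mode: JSON wrapped in chatty prose.
    """
    return 'Sure! I think you want: {"intent": "set_alarm", "time": "7am"} Hope that helps!'

def parse_intent(user_text: str) -> dict | None:
    """Try to map free-form user language to a structured intent.

    This is the fragile step Marcus points at: the model is asked
    for pure JSON, but nothing guarantees the reply is pure JSON,
    or that the fields mean what the prompt intended.
    """
    reply = ask_llm(
        "Reply with ONLY a JSON object with keys 'intent' and 'time'.\n"
        f"User said: {user_text}"
    )
    # Defensive parsing: hunt for the first {...} span in the reply
    # and hope it is well-formed.
    start, end = reply.find("{"), reply.rfind("}")
    if start == -1 or end == -1:
        return None  # no JSON at all; downstream code has nothing to act on
    try:
        return json.loads(reply[start : end + 1])
    except json.JSONDecodeError:
        return None  # syntactically broken JSON; same problem

print(parse_intent("wake me up at seven"))
# {'intent': 'set_alarm', 'time': '7am'} -- but only because the canned
# reply happens to contain valid JSON; a real model offers no such guarantee
```

The defensive scanning and the `None` fallbacks are exactly the kind of scaffolding Marcus argues developers will keep writing: the program can catch malformed output, but it cannot verify that a well-formed reply actually matches the user's intent.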
Marcus believes that within a decade, perhaps less, the focus of artificial intelligence will shift from scaling up large language models to integrating them with a broader range of technologies.
Cool demos are always fun, but that doesn't mean they can lead us toward trustworthy general artificial intelligence.
In this regard, Marcus predicts that what we will need is a new architecture that takes explicit knowledge and world models as its core.
Reference: https://garymarcus.substack.com/p/what-to-expect-when-youre-expecting