Gary Marcus calls out Hinton and Musk: Deep learning is hitting a wall, and I'm betting $100,000

"If someone says deep learning has hit a wall, all they have to do is write down a list of things deep learning can't do. In five years, we'll be able to show that deep learning can do them."

On June 1, the reclusive Geoffrey Hinton appeared as a guest on UC Berkeley professor Pieter Abbeel's podcast. The two talked for 90 minutes, covering topics ranging from masked autoencoders and AlexNet to spiking neural networks.


On the show, Hinton explicitly questioned the view that "deep learning has hit a wall."

The statement "Deep learning has hit a wall" comes from an article in March by the well-known AI scholar Gary Marcus. To be precise, he believes that "pure end-to-end deep learning" has almost come to an end, and the entire AI field must find a new way out.

Where does that way forward lie? In Gary Marcus's view, symbolic processing has a bright future. The community, however, has never taken this view seriously; Hinton has even said: "Any investment in symbolic-processing methods is a huge mistake."

Hinton's public "rebuttal" on the podcast naturally caught Gary Marcus's attention.

Just a dozen hours ago, Gary Marcus sent an open letter to Geoffrey Hinton on Twitter:


The letter reads: "I noticed that Geoffrey Hinton is looking for some challenging targets. I actually wrote such a list with the help of Ernie Davis, and last week I issued a $100,000 bet to Musk."

Where does Musk come into this? The story starts with a tweet at the end of May.

A Hundred Thousand Dollar Bet with Musk

For a long time, people have pictured AGI as the kind of AI seen in movies such as 2001: A Space Odyssey (HAL) and Iron Man (JARVIS). Unlike today's AI, which is trained for one specific task, AGI would, more like the human brain, be able to learn how to perform almost any task.

Most experts believe AGI is decades away, and some believe it will never be achieved. One survey of experts in the field put the odds of achieving AGI by 2099 at 50%.

Musk, by contrast, appears far more optimistic and has even said publicly on Twitter: "2029 is a critical year. I'd be surprised if we haven't achieved AGI by then. Hopefully the same goes for the people on Mars."


Gary Marcus, who disagreed, quickly shot back: "How much are you willing to bet?"


Musk did not reply, but Gary Marcus went on to say that he could set up a wager on Long Bets for $100,000.

In Gary Marcus's view, Musk's track record on such predictions is not reliable: "For example, in 2015 you said fully autonomous cars were two years away. You have said much the same thing almost every year since, and fully autonomous driving still has not arrived."

On his blog, he also laid out five criteria for judging whether AGI has been achieved, to serve as the terms of the bet:

  • In 2029, AI will not be able to watch a movie and tell you exactly what is happening (who the characters are, what their conflicts and motivations are, and so on);
  • In 2029, AI will not be able to read a novel and reliably answer questions about its plot, characters, conflicts, motivations, and so on;
  • In 2029, AI will not be able to work as a competent cook in an arbitrary kitchen;
  • In 2029, AI will not be able to reliably construct more than 10,000 lines of bug-free code from a natural-language specification or through interaction with non-expert users (gluing together code from existing libraries doesn't count);
  • In 2029, AI will not be able to take arbitrary proofs from the mathematical literature, written in natural language, and convert them into a symbolic form suitable for symbolic verification.


"Here's my advice if you (or anyone else) manage to do it in 2029 At least three, even if you win. Deal? How about one hundred thousand dollars?"

As more people joined in, the total stake grew to $500,000. So far, however, Musk has not responded.

Gary Marcus: AGI is not as “near” as you think

On June 6, Gary Marcus published an article in Scientific American reiterating his position: AGI is not nearly as "close" as you might think.


To the average person, it looks as though huge progress is being made in artificial intelligence. Judging by media reports, OpenAI's DALL-E 2 can seemingly turn any text into an image, GPT-3 appears omniscient, and DeepMind's Gato system, released in May, performs well on every task... One senior DeepMind executive even boasted that the quest for artificial general intelligence (AGI), AI with human-level intelligence, is underway...

Don't be fooled. Machines may one day be as smart as humans, perhaps even smarter, but that day is far off. A great deal of work remains before we can create machines that truly understand and reason about the real world. What we need right now is less posturing and more basic research.

To be sure, AI is making progress in some areas: synthetic images look increasingly realistic, and speech recognition now works in noisy environments. But we are still a long way from general, human-level AI. AI cannot yet understand what articles and videos really mean, for example, nor can it cope with unexpected obstacles and interruptions. We still face the same challenge AI has faced for years: making AI reliable.

Take Gato as an example. Given the task of captioning an image of a pitcher throwing a baseball, the system returned three different answers: "A baseball player pitching a ball on a baseball field," "A man throwing a baseball at a pitcher on a baseball field," and "A baseball player at bat and a catcher during a baseball game." The first answer is correct, while the other two appear to include additional players who are not visible in the image. This suggests that Gato does not know what is actually in the image, only what is typical of roughly similar images. Any baseball fan can tell that this is a pitcher who has just thrown the ball; while we would expect a catcher and a batter to be nearby, they are conspicuously absent from the image.


Similarly, DALL-E 2 confuses the two spatial relationships in "a red cube on top of a blue cube" and "a blue cube on top of a red cube." Likewise, the Imagen model Google released in May could not distinguish "an astronaut riding a horse" from "a horse riding an astronaut."


You may still find it mildly amusing when a system like DALL-E goes wrong, but some AI systems cause very serious problems when they fail. For example, a self-driving Tesla recently drove straight toward a worker holding a stop sign in the middle of the road, and only slowed down after the human driver intervened. The system could recognize a human and a stop sign on their own, but failed to slow down when it encountered the unusual combination of the two.

So, unfortunately, AI systems remain unreliable and have trouble adapting quickly to new environments.

Gato performed decently on all of the tasks DeepMind reported, but rarely as well as contemporary specialized systems. GPT-3 often writes fluent prose, yet it still struggles with basic arithmetic and has so weak a grip on reality that it readily produces absurd sentences such as "some experts believe that eating socks helps the brain change its state."

The problem behind all this is that the largest research teams in AI are no longer found in academia but in large technology companies. And unlike universities, companies have no incentive to play fair. Their new papers are announced through press releases rather than submitted for academic scrutiny, courting media coverage while sidestepping peer review. We learn only what the companies themselves want us to know.

The software industry has a word for this strategy: "demoware," software designed to look good in a demo but not necessarily to hold up in the real world.

AI products marketed this way either never ship smoothly or turn out to be a mess in practice.

Deep learning has improved machines' ability to recognize patterns in data, but it has three major flaws: the patterns it learns are superficial rather than conceptual; the results it produces are hard to interpret; and they are difficult to generalize. As Harvard computer scientist Leslie Valiant has pointed out: "The central challenge for the future is to unify the formulation of learning and reasoning."

Right now, companies chase ever-better benchmark numbers rather than new ideas, pushing for small incremental improvements instead of pausing to think about more fundamental questions.

We need more people asking basic questions such as "How do we build a system that can learn and reason at the same time?" rather than chasing flashy product demos.

The debate over AGI is far from over, and other researchers are weighing in. Blogger Scott Alexander has noted that Gary Marcus is a legend and that what he has written over the past few years has been more or less completely accurate, but that there is still a counterargument worth considering.

For example, Marcus had previously criticized several failures of GPT-2. Eight months later, when GPT-3 came out, those particular failures had been fixed. But Marcus showed GPT-3 no mercy either, writing an article arguing that "OpenAI's language generator has no idea what it is talking about."

In essence, the counterargument goes like this: Gary Marcus mocks each generation of large language models as a gimmick, yet the models keep getting better and better, and if that trend continues, AGI may not be far off.
