
Turing Award winners clash. LeCun: the AI extinction theory of Bengio, Hinton, and others is absurd

PHPz
Release: 2023-10-30 20:17:14

On the question of AI risk, leading figures across the field hold very different views. Earlier, some took the lead in signing an open letter calling on AI laboratories to immediately pause their research, a position supported by deep learning pioneers such as Geoffrey Hinton and Yoshua Bengio.

Just in recent days, Bengio, Hinton, and others issued a joint letter, "Managing AI Risks in an Era of Rapid Progress," calling on researchers to adopt urgent governance measures before developing advanced AI systems, to focus on safety and ethical practices, and urging governments to take action to manage the risks AI poses.

The letter outlines several urgent governance measures, such as establishing national institutions to prevent the misuse of AI. Effective regulation requires governments to have a comprehensive picture of AI development. Regulators should adopt measures such as model registration, effective protection for whistleblowers, and monitoring of model development and supercomputer usage. Regulators would also need access to advanced AI systems before deployment in order to assess their dangerous capabilities.

Going back a little further, in May of this year the Center for AI Safety, a US-based non-profit organization, issued a statement warning that artificial intelligence should be treated as an extinction-level risk to humanity, comparable to pandemics. The statement was signed by a number of researchers, including Hinton and Bengio.

Also in May, Hinton resigned from Google so that he could speak freely about the risks posed by artificial intelligence. In an interview with The New York Times, he said: "Most people think the harms of AI are still far off. I used to think so too, that it might take 30 to 50 years or even longer. But now my thinking has changed."

In the eyes of AI heavyweights such as Hinton, managing the risks brought by artificial intelligence is an urgent task.

Yann LeCun, another central figure in deep learning, is nevertheless optimistic about the development of artificial intelligence. He objected to signing the joint letter on AI risk and does not believe that the development of AI poses a threat to humanity.

Recently, in an exchange with users on X, LeCun answered several questions from netizens about AI risk.


The netizen asked for LeCun's view of the article "'This is his climate change': The experts helping Rishi Sunak seal his legacy." The article argues that the joint letter issued by Hinton, Bengio, and others has changed how people view AI, from seeing it as a helpful assistant to seeing it as a potential threat. It goes on to say that in recent months observers have detected growing sentiment in the UK that AI could bring about the end of the world. In March of this year, the British government published a white paper promising not to stifle innovation in AI; yet just two months later, the UK was talking about putting guardrails on AI and urging the US to accept its plans for global AI rules.


Article link: https://www.telegraph.co.uk/business/2023/09/23/

LeCun's response to the article was that he does not want the UK's fatalistic concerns about AI risk to spread to other countries.


Then came the exchange between LeCun and X users mentioned earlier. The following is LeCun's full answer:

"Altman, Hassabis, and Amodei are engaged in large-scale corporate lobbying. They are trying to regulate the AI ​​industry. And you, Geoff, and Yoshua are lobbying those companies Ban those who open up AI research from providing "ammunition."

If your fear campaign succeeds, it will inevitably lead to what you and I would both consider a catastrophic outcome: a handful of companies will control AI.

The vast majority of our academic colleagues are strongly in favor of open AI research and development. Very few believe in the doomsday scenarios you promote; you, Yoshua, Geoff, and Stuart are the exceptions.

Like many, I am a strong supporter of open AI platforms because I believe in the importance of combining a variety of forces: people's creativity, democracy, market forces, and product regulation. I also believe we are capable of building AI systems that are safe and under our control. I have made concrete proposals that I hope will steer people toward making good decisions.

Your writing gives the impression that AI is a natural phenomenon whose development we cannot control. In reality, that is not the case. AI makes progress because of individual people in human society, and each of us has the power to build the right things. Calling for regulation of R&D implicitly assumes that these people and the organizations they work for are incompetent, reckless, self-destructive, or evil. They are not.

I have made many arguments for why the doomsday scenarios you fear are ridiculous, and I won't repeat them here. But the main point is this: if powerful AI systems are driven by objectives, including guardrails, they will be safe and controllable, because we are the ones who set those guardrails and objectives. (Current autoregressive LLMs are not objective-driven, so we should not extrapolate from the weaknesses of autoregressive LLMs.)

As for open source, the effect of your campaign will be the exact opposite of what you seek. In the future, AI systems will become the repository of all human knowledge and culture. What we need are open-source, freely available platforms so that everyone can contribute to them. Openness is the only way to make AI platforms reflect the full breadth of human knowledge and culture, and that requires contributions to these platforms to be crowdsourced, much like Wikipedia. This will not work unless the platforms are open.

If open-source AI platforms are regulated out of existence, the alternative is that a small number of companies will control the AI platforms, and thereby control everything people depend on digitally. What does that mean for democracy? What does it mean for cultural diversity? This is what keeps me up at night."

Under LeCun's post, many people expressed support for his views.

Irina Rish, a professor in the Department of Computer Science and Operations Research at the University of Montreal and a core member of the Mila-Quebec AI Institute, said that researchers who support open-source AI should perhaps stop staying silent and instead take the lead in the development of emerging AI.


Beff Jezos, founder of the e/acc movement, also said in the comments that statements like this are important, and exactly what people need.


Other netizens said that discussing safety issues can, at first, enrich everyone's imagination about future technology, but sensational science fiction should not be allowed to produce policies that entrench monopolies.

Podcast host Lex Fridman said he is looking forward to more from this debate.


The debate over AI risk will also shape AI's future development. When one idea dominates the spotlight, people tend to follow it blindly; only if both sides keep engaging in rational discussion can the "true face" of AI come into view.


Source: 51cto.com