
The risk of out-of-control AI sparks a sign-waving protest at Meta, while LeCun says the open-source AI community is booming

WBOY
Release: 2023-10-10 21:05:03

In the field of AI, people have long been divided over open source versus closed source. In the era of large models, however, open source has quietly grown into a powerful force. According to a previously leaked internal Google document, the broader community is rapidly building models that rival OpenAI's and Google's large models, with open-source models built around Meta's LLaMA at the center.

There is no doubt that Meta sits at the core of the open-source world and has been pushing the open-source cause hard, most recently with the release of Llama 2. But as the saying goes, a tall tree catches the wind, and Meta has recently run into trouble precisely because of open source.

Outside Meta's San Francisco office, a group of protesters gathered with signs to protest Meta's strategy of publicly releasing AI models, claiming that these releases cause the "irreversible proliferation" of potentially unsafe technology. Some protesters even compared the large models Meta releases to "weapons of mass destruction."

These protesters call themselves "concerned citizens" and are led by Holly Elmore. According to LinkedIn, she is an independent advocate for the AI Pause movement.

Image source: MISHA GUREVICH

She noted that if a model proves unsafe, access can be cut off at the API, for example by only letting users reach large models through an API, as Google and OpenAI do. In contrast, Meta's LLaMA series releases the model weights to the public, which allows anyone with the right hardware and expertise to copy and fine-tune the model themselves. Once the weights are released, the publishing company no longer has any control over how the AI is used.
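To make the distinction concrete, here is a minimal sketch of the two release models: a hosted API the provider can filter, rate-limit, or switch off at any time, versus open weights downloaded and run on the user's own hardware. The endpoint URL, API key, and request format below are hypothetical placeholders rather than any provider's real interface; the local example assumes the Hugging Face transformers library and access to the Llama 2 checkpoint.

```python
# Minimal sketch: API-gated access vs. locally held open weights.
# The hosted endpoint, key, and request schema are hypothetical placeholders.
import requests


def ask_hosted_model(prompt: str) -> str:
    """Every request passes through the provider, which can filter, log, or revoke access."""
    resp = requests.post(
        "https://api.example-provider.com/v1/generate",  # hypothetical endpoint
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={"model": "hosted-llm", "prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]


def ask_local_open_weights(prompt: str) -> str:
    """The weights sit on the user's own disk; there is no remote kill switch."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "meta-llama/Llama-2-7b-chat-hf"  # openly released weights (gated download)
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

The specific libraries are beside the point; what matters is who holds the weights. In the first function the provider can change or disable the service at any time, while in the second nothing stops the user from modifying or redistributing the model after download.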

For Holly Elmore, releasing model weights is a dangerous strategy, because anyone can modify the model and those modifications cannot be undone. She believes that "the more powerful the model, the more dangerous this strategy is."

Compared with open-source releases, large models accessed through an API usually come with safety features, such as response filtering or specific training that prevents them from outputting dangerous or offensive responses.

With the model weights in hand, it becomes much easier to retrain the model to jump over these "guardrails," making it more likely that open-source models will be exploited to create phishing software and conduct cyberattacks.
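As a rough illustration of what such a "guardrail" can look like on the hosted side, here is a deliberately simple, hypothetical keyword filter applied to model output before it is returned to the caller. Real services rely on far more sophisticated classifiers and refusal training, but the asymmetry is the same: the filter runs on the provider's servers, whereas someone running open weights locally can simply leave it out or fine-tune the refusal behavior away.

```python
# Hypothetical server-side output filter of the kind an API provider might
# apply to every response; the topic list and refusal text are made up.
BLOCKED_TOPICS = ("phishing", "malware", "credential harvesting")


def filter_response(model_output: str) -> str:
    """Replace the response with a refusal if it touches a blocked topic."""
    lowered = model_output.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that."
    return model_output


# The provider enforces this before the text ever leaves its servers.
print(filter_response("Step one of a phishing campaign is ..."))  # refused
print(filter_response("Here is a recipe for banana bread."))      # passes through
```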

Image source: MISHA GUREVICH

She believes part of the problem is that "security measures around model releases are insufficient" and that there needs to be a better way to keep models safe.

Meta has not commented so far. However, Yann LeCun, Meta's chief AI scientist, appeared to respond to the claim that "open source AI must be outlawed" by showing off the flourishing open-source AI startup community in Paris.

Many people disagree with Holly Elmore, arguing that an open strategy for AI development is the only way to ensure trust in the technology.

Some netizens noted that open source has pros and cons: it offers greater transparency and spurs innovation, but it also carries the risk of misuse (of code, for example) by malicious actors.

As expected, OpenAI was once again ridiculed: "It should go back to being open source."

There are also plenty of people who are worried about open source.

Peter S. Park, a postdoctoral fellow in AI safety at MIT, said that the widespread release of advanced AI models in the future will pose particular problems, because it is essentially impossible to prevent their misuse.

However, Stella Biderman, executive director of the nonprofit AI research organization EleutherAI, said: "So far, there is little evidence that open-source models have caused any specific harm, and it is unclear whether simply putting a model behind an API solves the safety problem."

Biderman believes: "The basic ingredients for building an LLM have already been disclosed in freely available research papers, and anyone in the world can read those papers and develop their own model."

She added: "Encouraging companies to keep model details secret may have serious adverse consequences for the transparency of research in the field, for public awareness, and for scientific development, especially for independent researchers."

While everyone debates the impact of open source, it remains unclear whether Meta's approach is really open enough, and whether it can actually deliver the benefits of open source.

Stefano Maffulli, executive director of the Open Source Initiative (OSI), said: "The concept of open-source AI has not been properly defined. Different organizations use the term to refer to different things, indicating varying degrees of 'publicly available stuff,' which can confuse people."

Maffulli pointed out that for open-source software, the key question is whether the source code is publicly available and can be reused for any purpose. Reproducing an AI model, however, may require sharing the training data, how the data was collected, the training software, the model weights, the inference code, and more. Most importantly, the training data may raise privacy and copyright issues.

OSI has been working on a precise definition of "open source AI" since last year and is likely to release an early draft in the coming weeks. Either way, he believes open source is crucial to the development of AI. "We can't have trustworthy, responsible AI if AI is not open source," he said.

The debate between open source and closed source will continue, but open source has become an unstoppable trend.

