
Will artificial intelligence lead to the demise of humanity? Experts say they are more worried about disinformation and user manipulation

王林
2023-06-04 22:40:01


June 4 news: With the rapid development and spread of artificial intelligence technology, many in the industry worry that unchecked AI could lead to the demise of humanity. Experts say, however, that the biggest negative impact of AI is unlikely to be the nuclear-war scenarios of science fiction films; it is more likely to be a deteriorating social environment caused by disinformation and user manipulation.

The following is the translation:

In recent months, concern about artificial intelligence has grown across the industry. Just this week, more than 300 industry leaders issued a joint open letter warning that artificial intelligence could lead to human extinction and should be taken as seriously as "pandemics and nuclear war."

Terms like "AI doomsday" conjure up images from science fiction movies of robots taking over the world, but what would letting artificial intelligence develop unchecked actually look like? Experts say reality may be less dramatic than a movie plot: rather than AI launching nuclear weapons, it would be the gradual deterioration of the basic fabric of society.

Jessica Newman, director of the Artificial Intelligence Security Initiative at the University of California, Berkeley, said: "I don't think people should worry about AI becoming evil, or AI having some kind of malicious desire. The danger comes from something much simpler, which is that people may program AI to do harmful things, or that we end up integrating inherently inaccurate AI systems into more and more areas of society, causing harm."

This is not to say that we shouldn't worry about artificial intelligence. Even if doomsday scenarios are unlikely, powerful AI can destabilize society by exacerbating misinformation problems, manipulating human users, and drastically transforming labor markets.

Although artificial intelligence technology has been around for decades, the popularity of large language models like ChatGPT has intensified long-standing concerns. Meanwhile, tech companies are scrambling to build artificial intelligence into their products, competing with one another and creating a host of new problems, Newman said.

"I'm very concerned about the path we're on right now," she said. "We're in a particularly dangerous time for the field of artificial intelligence as a whole because these systems, while they may look special, are still very It is inaccurate and has inherent loopholes."

The experts interviewed said the following are the areas they worry about most.

Misinformation and disinformation

The so-called artificial intelligence revolution is already underway in many fields. Machine learning technology, which underpins social media news-feed algorithms, has long been accused of exacerbating problems such as inherent bias and misinformation.

Experts warn that these unresolved problems will only intensify as artificial intelligence models develop. In the worst case, people's shared understanding of truth and valid information could erode, leading to more incidents built on lies. Experts say a rise in misinformation and disinformation could trigger further social unrest.

"It could be argued that the social media debacle was our first encounter with truly dumb AI, because those recommender systems were really just simple machine learning models," said Peter Wang, CEO and co-founder of the data science platform Anaconda. "We really failed miserably." Wang added that these errors could trap the systems in a never-ending vicious cycle: language models are trained on flawed information, and their output in turn becomes yet another flawed data set used to prepare future models. This can lead to a "model cannibalism" effect, in which future models are permanently skewed by the biases amplified in the output of past models.

Experts say artificial intelligence is amplifying inaccurate misinformation and disinformation that can easily mislead people. Large language models like ChatGPT are prone to a phenomenon known as "hallucination," repeatedly fabricating false information. A study by the news industry watchdog NewsGuard identified dozens of online "news" sites whose content is written entirely by artificial intelligence, many of which contain inaccuracies.

NewsGuard co-CEOs Gordon Crovitz and Steven Brill said such systems could be exploited by bad actors to deliberately spread false information at scale.

Crovitz said: "Malicious actors can create false claims and then use the multiplier effect of these systems to spread disinformation at scale. Some say the dangers of artificial intelligence are being overstated, but in the realm of news and information, misinformation is the aspect of AI most likely to cause harm to individuals, and the potential for harm at a much larger scale makes the risk highest. The question is, how do we create an ecosystem in which we can understand what is real? How do we verify what we see online?"

Malicious Manipulation of Users

While most experts say misinformation is the most immediate and widespread concern, there is still considerable debate over the extent to which the technology can manipulate users' minds, and over whether such manipulation causes real harm.

These concerns have already played out in tragic ways. A man in Belgium reportedly died by suicide after a chatbot encouraged him to do so. Other chatbots have told a user to break up with his partner, or told a user with an eating disorder to lose weight.

Because chatbots are designed to communicate with users in a conversational format, they may inspire greater trust, Newman said.

"Large language models are particularly capable of persuading or manipulating people into subtly changing their beliefs or behaviors," she said. "Loneliness and mental health are already big issues around the world, and we need to see what kind of cognitive impact chatbots will have on the world."

So what worries experts is not that AI chatbots will gain sentience and overpower their users, but that the large language models behind them may manipulate people into doing harm to themselves that they otherwise would not. Newman said this is especially true of language models that run on an advertising model and try to manipulate users' behavior so that they stay on the platform as long as possible.

Newman said: "In many cases, the harm caused to users is not because they want to do so, but the consequences of the system's failure to follow safety protocols."

Newman added that the human-like nature of chatbots makes users particularly vulnerable to manipulation.

She said: "If you talk to a thing that uses first-person pronouns and talks about its own feelings and situations, even if you know it is not real, it is still more likely to trigger a feeling that it is like a human. reaction, making it easier for people to want to believe it." "The language model makes people willing to trust it and treat it as a friend rather than a tool." One concern is that digital automation will replace a large number of human jobs. Some studies conclude that AI will replace 85 million jobs globally by 2025 and more than 300 million jobs in the future.

Artificial intelligence is affecting many industries and roles, from screenwriters to data scientists. AI can now pass the bar exam much like a real lawyer, and can answer health questions better than a real doctor.

Experts have warned that the rise of artificial intelligence could lead to mass unemployment and social instability.

Wang warned that large-scale layoffs will come in the near future, with "many jobs at risk," and that there are few plans in place to deal with the consequences.

"In the United States, there is no framework for how people can survive when they lose their jobs," he said. "That's going to lead to a lot of chaos and unrest. To me, that's the most concrete and realistic thing that's going to come out of this." Unintended consequences."

What to do in the future

Despite growing concerns about the negative impact of the tech industry and social media, measures to regulate them in the United States have been rare. Experts worry that the same will be true for artificial intelligence.

Peter Wang said: "One of the reasons many of us are worried about the development of artificial intelligence is that over the past 40 years, the United States as a society has basically given up on regulating technology."

Nonetheless, the U.S. Congress has taken some proactive steps in recent months, holding a hearing at which OpenAI CEO Sam Altman testified about the regulatory measures that should be put in place. Finley said she was "encouraged" by these moves, but that more work is needed to develop technical standards for artificial intelligence and for how such systems are released.

“It’s difficult to predict how legislative and regulatory authorities will respond,” she said. “We need this level of technology to be subject to rigorous scrutiny.”

Although the dangers of artificial intelligence are the top concern of most people in the industry, not all experts are "doomsday theorists." Many are also excited about the technology's potential applications.

"In fact, I think this new generation of AI technology could really unleash tremendous potential for humanity, allowing human society to flourish on a far greater scale than it has over the past 100 or even 200 years," Wang said. "I'm actually very, very optimistic about its positive impact."

