
Five major AI risks in the age of ChatGPT and generative AI

PHPz
Release: 2023-04-10 14:41:04


For years, we have been quietly interacting with artificial intelligence through voice assistants, social media, search algorithms, facial recognition on our phones, and more. But with the emergence of generative AI such as ChatGPT, artificial intelligence has taken center stage.

Suddenly, we are experiencing artificial intelligence viscerally, and we are amazed by what we see and hear. Artificial intelligence no longer feels like "something that will happen someday." It is here now, and it is ready to change the world.

Change brings risks. ChatGPT should be a shocking wake-up call for businesses and management teams that have not yet developed an AI risk management plan. This article explores the risks associated with widespread implementation of artificial intelligence.

The following are the five major artificial intelligence risks that business leaders should pay attention to:

1. Disruption risk

Artificial intelligence will disrupt existing business models and technologies with unprecedented speed and reach. The most obvious example is ChatGPT itself. Who would have thought that Google's status as the undisputed search champion could be challenged so suddenly and so precariously? Just a year or two ago, most people assumed that artificial intelligence would disrupt industries that rely on relatively low-skilled labor, such as trucking and customer service, or, in the worst case, highly structured jobs such as financial trading and radiology. We now know that creative industries such as media and advertising, personalized service industries such as teaching and financial advice, and even elite skill areas such as pharmaceutical research and development and computer science are all at risk.

According to a March 2023 report from Goldman Sachs, generative AI like ChatGPT could eliminate as many as 300 million jobs globally, including 19% of existing jobs in the United States. Regardless of your industry or career, it is almost certain that your business will face significant changes in the coming years. Unlike past technological disruptions, the stakes this time may truly be life or death.

2. Cybersecurity risk

Protecting organizational data, systems, and personnel from hackers and other saboteurs has become an increasingly serious issue for business leaders. In 2022, the number of attacks increased by 38%, the average organization experienced more than 1,000 attacks per week, and the average cost of a data breach ballooned to more than $4 million.

Artificial intelligence will exponentially exacerbate this challenge. Imagine how effective a phishing attack could be when a sophisticated AI like ChatGPT sends employees emails that appear to come from the boss, drawing on information typically known only to the boss and even mimicking the boss's writing style.

Since at least 2019, there have been reports of deepfakes such as voice cloning being used in online scams. As artificial intelligence improves and diversifies by the day, the problem of cyber risk management only gets worse.

If you think firewalls and today's other network defense technologies can save you, think again. AI will help bad actors find the weakest link in a defense and then work around the clock until they break through.

3. Reputation risk

When ChatGPT first appeared in the public eye, Google executives initially cited "reputational risk" as a reason not to immediately launch a competing AI. Days later, the company reversed course and announced the launch of Bard. The mistakes and embarrassments at Bing and elsewhere in generative AI have since proven that Google's initial concerns were well founded.

The public is watching and waiting. When AI behaves in ways that are inconsistent with an organization's values, the result can be a public relations disaster. Emerging forms of artificial intelligence have already behaved like racist, misogynistic monsters, contributed to false arrests, and amplified bias in hiring.

Sometimes, artificial intelligence can damage human relationships. According to Forrester, 75% of consumers are frustrated by customer service chatbots, and 30% have taken their business elsewhere after a poor AI-powered customer service interaction. Artificial intelligence is still young and prone to errors. Given the high stakes, business leaders should fully understand the reputational risks involved before deploying AI.

4. Legal risk

The federal government is preparing to address the social challenges brought about by the rise of artificial intelligence. In 2022, the Biden administration unveiled a blueprint for an AI Bill of Rights to protect privacy and civil liberties. In 2023, the National Institute of Standards and Technology released an AI risk management framework to help corporate boards and other organizational leaders deal with AI risks. The Algorithmic Accountability Act of 2022, still only a bill, aims to create transparency into a wide range of automated decision-making mechanisms. And that is just federal legislation: in 2022 alone, no fewer than 17 states proposed legislation to govern artificial intelligence, targeting facial recognition, hiring bias, addictive algorithms, and other AI use cases. For multinational corporations, the EU's proposed Artificial Intelligence Act aims to ban or restrict the use of biometrics, psychological manipulation, exploitation of vulnerable groups, and social credit scoring.

New regulations are coming soon, possibly within 2023, and businesses face risks beyond compliance. If something goes wrong with a product or service that uses AI, who will be held liable: the product or service provider, the AI developer, the data provider, or the corporate entity? At the very least, organizations will need to provide transparency into how their AI makes decisions in order to comply with the transparency provisions of new laws.

5. Operational risk

The final area of AI risk is perhaps the most obvious, but in some ways the most dangerous. What happens if an employee accidentally misuses ChatGPT and leaks trade secrets? What happens when AI does not work as expected? The negative impact of adopting AI too quickly can be enormous.

ChatGPT is the most famous example of advanced artificial intelligence today, and the whole world is testing it and reporting its shortcomings every day. AI used inside an enterprise may not enjoy that benefit. When an AI tells you to double down on a specific supplier, material, or product and gets it wrong, how will you know before the damage is done?

IBM's Watson once suggested incorrect and dangerous treatment methods for cancer patients. Britain's Tyndaris Investments was sued by Hong Kong tycoon Samathur Li Kin-kan after its hedge fund AI lost as much as $20 million in a single day. And what about an out-of-control Tesla that hits and kills a pedestrian? It is the business leader's job to be aware of this operational risk and to manage it.

