
How CIOs can address the perceived risks posed by AI

王林
Release: 2024-03-21 13:01:09


Ask the average person what the biggest risks of AI are and their answers might include: AI will make us humans obsolete; Skynet will become a reality and drive humanity extinct; deepfake creation tools will be used by bad people to do bad things.

Most CEOs, meanwhile, believe the biggest risk of AI is the missed opportunity, especially the chance that competitors will implement AI-based business capabilities before they do.

As a CIO, you need to address the AI risks people actually perceive while also anticipating the ones they don't yet see. Here's how to do that effectively.

AI risks as perceived by ordinary people

1. Will AI make humans obsolete? This is not a risk but a choice. Personal computers, then the internet, then smartphones each opened up opportunities for computer-augmented humans. AI can do the same, and business leaders can focus on building a stronger, more competitive business by using AI capabilities to augment and empower their employees.

They can, and some will; others will use AI to automate tasks currently performed by the humans they employ.

Or, more likely, they will do both. Neither approach is better in an absolute sense, but they are different, and as a CIO you must help communicate the company's intentions: whether AI will be used to augment employees or to replace them.

2. Will Skynet become a reality? Skynet is the possible AI future that inspires shudders, but it is also considered the most unlikely scenario. That is not because killer robots are impossible to create, but because there is no good reason to create and invest in such destructive artificial intelligence.

In nature, predation is a survival necessity for most organisms: predators hunt prey to ensure their own survival and ability to reproduce. Very few creatures other than humans harm other species simply for fun; such behavior is rare, and usually the result of human intervention or environmental damage. Interdependence and balance keep the predator-prey relationship stable, with every living thing playing a role in the ecosystem. Whether competition among AIs for resources such as electricity and semiconductors will ever become so fierce that killer-robot scenarios are a problem we must face remains to be seen.

Even then, an AI competing with us for power and semiconductors would have little reason to waste those very resources building killer robots.

3. Deepfakes. Yes, deepfakes are a problem, and as the front line of the war on reality, they are a problem that will only get worse. Deepfake-generation AI and deepfake-detection AI will have to improve faster and faster just to keep pace with each other.

So just as malware countermeasures evolved from standalone antivirus tools into industry-wide cybersecurity, we can expect a similar trajectory for deepfake countermeasures as the war on reality heats up.

AI Risks as Perceived by CEOs

CEOs who don’t want to become ex-CEOs anytime soon spend considerable time and attention on some form of “TOWS” analysis (threats, opportunities, weaknesses, and strengths).

As a CIO, one of your most important responsibilities has long been to help drive business strategy by connecting the dots between IT-based capabilities and business opportunities (if your business is the first to exploit them) or threats (if a competitor exploits them first).

This was the case before the current AI craze took over the IT industry (it’s what “digital” was all about), and it’s even more true now.

AI adds another layer to that responsibility: figuring out how to integrate its new capabilities into the business as a whole.

Silent AI-Based Threats: Human-Created Weaknesses

There is another type of risk that deserves more attention than it gets: what might be called man-made human vulnerabilities.

Start with Daniel Kahneman’s Thinking, Fast and Slow. In the book, Kahneman identifies two modes of thinking. When we think fast, we use brain circuits that let us understand things at a glance, without delay and with almost no effort. Thinking fast is also what we do when we "trust our gut."

When we think slowly, we're using a circuit that lets us multiply 17 by 53—a process that requires considerable concentration, time, and brainpower.

When it comes to AI, slow thinking is what expert systems do, and, to that extent, what old-school computer programming does. Fast thinking is what's most exciting about AI right now; it's what neural networks are for.
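To make the contrast concrete, here is a minimal sketch (the loan-approval rule, the data, and the scikit-learn model are all invented for illustration; the article prescribes none of them). The "slow" system is a set of explicit, auditable rules; the "fast" one is a small neural network that learns roughly the same boundary from examples but stores it as opaque weights.

```python
# Illustrative contrast only: "slow thinking" as explicit rules vs.
# "fast thinking" as a learned pattern-matcher. The rule, the data,
# and the model choice are assumptions made for this sketch.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def approve_loan_expert_system(income: float, debt: float) -> bool:
    """Expert-system style: every step is an explicit, auditable rule."""
    return income > 50_000 and debt / income < 0.4

# The fast-thinking analogue: a tiny neural network learns the same
# boundary from labeled examples, but encodes it as opaque weights.
rng = np.random.default_rng(1)
X = rng.uniform([20_000, 0], [120_000, 60_000], size=(1_000, 2))
y = np.array([approve_loan_expert_system(i, d) for i, d in X])

net = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2_000, random_state=1),
).fit(X, y)

print("rules say:", approve_loan_expert_system(80_000, 10_000))
print("network says:", bool(net.predict([[80_000, 10_000]])[0]))
```

The two usually agree on clear-cut cases, but only the rule-based version can show its work, a point this section returns to below.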

In its current state of development, AI's fast-thinking form is prone to the same cognitive errors we commit when we trust our gut. For example:

Inferring causation from correlation: We all know we shouldn't do this. Nevertheless, it's hard to stop ourselves from inferring causation when correlation is all the evidence we have.

As it happens, what is called AI today consists largely of machine learning with neural networks, which boils down to inferring causation from correlation.
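As a toy sketch of how that goes wrong (the scenario, the numbers, and the use of numpy and scikit-learn are assumptions made for illustration): give a model two variables driven by a hidden common cause and it will happily treat one as a predictor of the other.

```python
# Toy confounder example: ice cream sales and drownings are both
# driven by temperature; neither causes the other, yet a model
# trained on sales alone looks genuinely predictive.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 1_000
temperature = rng.normal(25, 5, n)                 # hidden common cause
ice_cream_sales = 10 * temperature + rng.normal(0, 10, n)
drownings = 0.5 * temperature + rng.normal(0, 2, n)

# Train on the correlated feature only; the confounder is never seen.
X = ice_cream_sales.reshape(-1, 1)
model = LinearRegression().fit(X, drownings)
print(f"R^2 of 'sales predict drownings': {model.score(X, drownings):.2f}")
# The fit looks real, but banning ice cream would not prevent a
# single drowning: the correlation has a cause, and it isn't sales.
```

Nothing in the fitted model distinguishes this spurious relationship from a causal one; that judgment still has to come from a human.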

Regression to the mean: Say you've watched The Great British Baking Show. You'll notice that whoever wins the Star Baker award in one episode tends to bake worse in the next episode: the curse of the Star Baker.

Except it isn't a curse; it's randomness in action. Every baker's performance follows a bell curve. When one of them wins Star Baker, that week's performance sits in one tail of the curve; the next time they bake, they will most likely perform near their average, not in the winning tail again.

There is no reason to expect machine-learning AI to be immune to this fallacy. Quite the contrary: faced with performance data generated by a random process, we should expect AI to predict improvement after each poor outcome, and then to conclude that some cause is at work.
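A minimal simulation makes the point (the bakers, the scores, and the distribution are invented for the sketch): when every weekly score is an independent draw from the same bell curve, this week's winner almost always looks worse the following week, no curse required.

```python
# Simulated "Star Baker curse": identical bakers, purely random
# weekly scores. The winner's follow-up week is usually worse.
import numpy as np

rng = np.random.default_rng(0)
n_bakers, n_weeks = 12, 10
# Every baker has identical skill; weekly scores are pure noise.
scores = rng.normal(loc=70, scale=10, size=(n_weeks, n_bakers))

drops = 0
for week in range(n_weeks - 1):
    star = scores[week].argmax()              # this week's Star Baker
    if scores[week + 1, star] < scores[week, star]:
        drops += 1                            # worse the next week

print(f"Star Baker did worse the following week in {drops} of {n_weeks - 1} weeks")
```

An AI fitting a model to these scores has exactly the same data a superstitious viewer has, and no built-in reason to resist the same conclusion.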

No “show your work”: not yours, the AI’s. There is active research into developing so-called "explainable AI", and it is sorely needed.

Suppose you assigned an employee to evaluate a possible business opportunity and recommend a course of action. They would do so, and you would ask, "Why do you think that?" Any competent employee expects this question and is ready to answer it.

Until “explainable AI” becomes a feature rather than a wish-list item, AI will be less capable in this respect than the workers many businesses hope it will replace: it cannot explain its own thinking.
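For a sense of where explainability tooling stands today, here is a sketch using permutation importance from scikit-learn (the dataset and the technique are assumptions; the article endorses no particular tool). Note how much weaker its answer is than a colleague's "here's why": it ranks which inputs mattered, not the reasoning behind the decision.

```python
# Permutation importance: one current, limited form of "explanation".
# Dataset and model are stand-ins chosen for the sketch.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much accuracy drops;
# a big drop means the model leaned on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]:<25} mean importance: {result.importances_mean[i]:.3f}")
```

That output is an accounting of influence, not an explanation, and the gap between the two is exactly the gap described above.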

Phrases to Ignore

No doubt you have heard someone claim, in the context of AI, that “a computer will never do x.”

They are wrong. People have been making this assertion since I first started in this business, and in all that time it has become obvious that computers can do x, whichever x you choose, and do it better than we can.

The only question is how long we have to wait for it to happen.
