Is artificial intelligence (AI) trustworthy?

王林
Release: 2023-04-12 12:37:06

Artificial intelligence is more artificial than intelligent

In June 2022, Microsoft released the Microsoft Responsible AI Standard, v2, whose stated purpose is to "define product development requirements for responsible AI". Perhaps surprisingly, the document mentions only one kind of bias in AI: the need for Microsoft's algorithm developers to be aware of problems caused by users relying too heavily on AI output (known as "automation bias").

In short, Microsoft seems more concerned with what users think of its products than with how the products actually affect users. That is good commercial responsibility (say nothing negative about our products) but poor social responsibility (there are many examples of algorithmic bias harming individuals or groups of individuals).

There are three major unresolved issues with commercial artificial intelligence:

  • Hidden biases causing false results;
  • The possibility of abuse by users or attackers;
  • Algorithms that return so many false positives that they negate the value of automation (a rough worked example follows this list).
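
To see why the third point matters, here is a minimal back-of-the-envelope sketch in Python. All of the numbers (event volume, threat rate, detection and false-positive rates) are illustrative assumptions, not measurements of any real product; the point is only that a small false-positive rate applied to a huge volume of mostly benign events can bury the handful of true alerts.

```python
# Hypothetical numbers showing how a low false-positive *rate* can still
# produce an overwhelming number of false alerts.
events_per_day = 1_000_000   # assumed volume of events the AI inspects
true_threat_rate = 0.0001    # assumed: 1 in 10,000 events is actually malicious
detection_rate = 0.99        # assumed true-positive rate of the model
false_positive_rate = 0.01   # assumed false-positive rate of the model

true_threats = events_per_day * true_threat_rate
benign_events = events_per_day - true_threats

true_alerts = true_threats * detection_rate
false_alerts = benign_events * false_positive_rate
precision = true_alerts / (true_alerts + false_alerts)

print(f"alerts per day:        {true_alerts + false_alerts:,.0f}")
print(f"of which real threats: {true_alerts:,.0f}")
print(f"precision:             {precision:.1%}")   # roughly 1 in 100 alerts is real
```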

Concerns in academia

When artificial intelligence was first introduced into cybersecurity products, it was described as a protective silver bullet. There is no doubt that AI has value, but a growing chorus of voices now pushes back against it, citing faulty algorithms, hidden bias, abuse of AI by criminals, and even privacy snooping by law enforcement and intelligence agencies.

According to "Scientific American" on June 6, 2022, the problem is the commercialization of a science that is still developing:

The largest AI research teams are found not in academia but in corporations. In academia, peer review is king. Unlike universities, businesses have no incentive to compete fairly. Rather than submitting their new work for academic review, they engage journalists through press releases and leapfrog the peer review process. We only know what the companies want us to know.

--Gary Marcus, professor of psychology and neuroscience at New York University

The result is that we hear only about the positive aspects of artificial intelligence, not the negative ones.

Emily Tucker, executive director of the Center on Privacy and Technology at Georgetown Law, came to a similar conclusion: "Starting today, the Center will stop using the terms 'artificial intelligence', 'AI' and 'machine learning' in our work to expose and mitigate the harms of digital technologies in the lives of individuals and communities... One of the reasons tech companies have been so successful in twisting the Turing Test into a strategic means of accessing capital is governments' appetite for the ubiquitous surveillance power that the technology promises. That power is convenient and relatively cheap to exercise, and it can be acquired through procurement processes that circumvent democratic decision-making or oversight."

In short, the pursuit of profit is getting in the way of the scientific development of artificial intelligence. Faced with these concerns, we need to ask whether the AI in our products can be trusted to produce accurate output and unbiased judgments, and whether it can be protected from misuse by users, criminals, or even governments.

The Failure of Artificial Intelligence

Case 1: A Tesla self-driving car drove straight toward a worker holding a stop sign, slowing down only when the driver intervened. The reason: the AI had been trained to recognize humans and to recognize stop signs, but not humans carrying stop signs.

Case 2: On March 18, 2018, an Uber self-driving car hit and killed a pedestrian pushing a bicycle. According to NBC at the time, the AI was unable to "classify an object as a pedestrian unless the object was close to a crosswalk."

Case 3: During the UK's 2020 COVID-19 lockdown, students' exam grades were estimated by an artificial intelligence algorithm. About 40% of students received grades significantly lower than expected, because the algorithm placed too much weight on each school's historical results. As a result, students at private schools and previously high-performing state schools gained a large advantage over students at other schools.

Case 4: Tay was an artificial intelligence chatbot launched by Microsoft on Twitter in 2016. By imitating real human language, Tay was meant to become an intelligent interactive system that could understand slang. But after just 16 hours of interaction with real people, Tay was forced offline. "Hitler was right to hate Jews," it had tweeted.

Case 5: Candidate selection. Amazon wanted AI to help it automatically screen candidates for job openings, but the algorithm's results were sexist and racist, favoring white male candidates.

Case 6: Mistaken identity. A Scottish football club streamed a match online during the coronavirus lockdown using an AI-powered camera to track the ball. But the AI camera system repeatedly mistook the linesman's bald head for the ball, so the broadcast kept focusing on the linesman rather than the game.

Case 7: Application rejected. In 2016, a mother applied for her son, who had woken after six months in a coma, to move into the apartment complex where she lived, but the housing center rejected the application. It was only a year later, after the son had been moved to a rehabilitation center, that a lawyer uncovered the reason: the artificial intelligence used by the housing center believed the son had a theft record and had blacklisted him from housing. In fact, the son had been bedridden and could not have committed the offense.

There are many similar examples, and the causes fall into just two categories: design failures caused by unintended bias, and learning failures. The self-driving-car cases are learning failures. Such errors can be corrected as training accumulates, but until they are corrected, deployment can exact a heavy price. Avoiding that risk entirely would mean never deploying at all.

Cases 3 and 5 are design failures: unintended bias distorted the results. The question is whether developers can remove biases they do not know they have.

Misuse and Abuse of Artificial Intelligence

Misuse means that an AI application has effects its developer did not intend. Abuse means deliberately interfering with an AI, for example by polluting the data it is fed. Generally speaking, misuse stems from the actions of the owner of an AI product, while abuse usually involves third parties (such as cybercriminals) manipulating the product in ways the owner never intended. Let's look at misuse first.

Misuse

Kazerounian, head of research at Vectra AI, believes that when human-developed algorithms try to make judgments about other people, hidden biases are inevitable. When it comes to credit applications and rental applications, for example, the United States has a long history of redlining and racism, and these discriminatory policies long predate AI-based automation.

Moreover, when biases are embedded deep in artificial intelligence algorithms, they are harder to detect and understand than human biases. "You may be able to see the classification that comes out of the matrix operations in a deep learning model, but you can only explain the mechanism of the computation, not the why. I think, at a higher level, what we must ask is: are some decisions suitable to be left to artificial intelligence at all?"

On May 11, 2022, a study by MIT and Harvard University published in "The Lancet" confirmed that people cannot understand how deep learning reaches its conclusions. The study found that artificial intelligence could identify race from medical images alone, such as X-rays and CT scans, but no one knew how the AI did it. Taking this further, AI medical systems may be capable of far more than we imagine when it comes to determining a patient's race, ethnicity, gender, or even incarceration status.

Anthony Celi, associate professor of medicine at Harvard Medical School and one of the authors, commented: "Just because you have representation of different groups in your algorithms (that is, data of adequate quality and validity), that does not guarantee things will stay that way, and it does not guarantee the algorithm will not amplify existing disparities and inequities. Feeding algorithms more data with better representation is not a panacea. This paper should make us pause and genuinely reconsider whether we are ready to bring artificial intelligence into clinical diagnosis."

The problem has also reached cybersecurity. On April 22, 2022, Microsoft added a feature called "Leaver Classifier" to its product roadmap, with availability expected in September 2022: "The leaver classifier detects early when employees intend to leave the organization, in order to reduce the risk of intentional or unintentional data leakage caused by departing employees."

When media outlets tried to interview Microsoft about the feature in the context of artificial intelligence and personal privacy, the answer they got was: "Microsoft has nothing to share at the moment, but if there is news we will let you know."

The ethical question is whether using AI to guess at an employee's intention to resign is a proper use of the technology at all. Few people would consider monitoring workplace communications to determine whether someone is thinking of leaving to be right or appropriate, especially when the consequences for that person could be negative.

Moreover, unintended biases in such algorithms are difficult to avoid and even harder to detect. If even humans struggle to judge personal motivation when predicting whether someone will leave a job, why would an artificial intelligence system not make the same mistakes? People communicate at work in many registers: speculating, joking, venting, or gossiping about others. Even updating a resume on a recruitment site may reflect nothing more than a passing thought. Yet once machine learning flags an employee as highly likely to leave, that person may be first in line for layoffs in a downturn and passed over for raises and promotions.

There is a broader possibility. If businesses can have this technology, so can law enforcement and intelligence agencies. The same errors of judgment can occur there, and the consequences are far more serious than a missed raise or promotion.

Abuse

Alex Polyakov, founder and CEO of Adversa.ai, is more worried about deliberate abuse of AI through manipulation of the machine learning process. "Research conducted by scientists and real-world assessments by our AI red teams [who play the role of attackers] have proven that modifying a very small set of inputs is sometimes enough to fool an AI's decisions, whether in computer vision, natural language processing, or anything else."
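
As a rough illustration of the kind of attack Polyakov describes, the sketch below trains an ordinary classifier on synthetic data and then flips its decision with a small, uniform nudge applied against the model's own weights. This is the linear-model analogue of the fast gradient-sign attacks used against deep networks; the dataset, model, and perturbation budget are all illustrative assumptions, not a reproduction of Adversa.ai's work.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train an ordinary classifier on synthetic data (a stand-in for any AI decision system).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[:1]                                   # one input the model currently classifies
print("original prediction: ", clf.predict(x)[0])

# Adversarial step: shift every feature by the same small amount, in the direction
# that works against the model's weights (the sign of the gradient for a linear model).
margin = clf.decision_function(x)[0]        # how far x sits from the decision boundary
budget = 1.1 * abs(margin) / np.abs(clf.coef_).sum()   # just enough to cross it
x_adv = x - budget * np.sign(clf.coef_) * np.sign(margin)

print("perturbed prediction:", clf.predict(x_adv)[0])
print(f"change per feature:   {budget:.3f}")  # typically a small fraction of the feature scale
```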

For example, depending only on punctuation, the words "eats shoots and leaves" can describe either a vegetarian or a terrorist. For artificial intelligence, exhausting the meanings of every word in every context is a nearly impossible task.

Polyakov has also twice demonstrated how easy it is to fool facial recognition systems. In the first demonstration, the system was made to believe that everyone in front of it was Elon Musk. In the second, an image that any human would read as one person was interpreted by the AI as several different people. The underlying principle of manipulating what the AI has learned can be applied by cybercriminals to almost any artificial intelligence tool.
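
To make "manipulating the learning process" concrete, here is a minimal, hypothetical sketch of training-data poisoning: an attacker who can feed mislabeled examples into the training pipeline can push the resulting model toward waving malicious items through. The dataset, model, and injection strategy are illustrative assumptions for the sake of the example, not a description of any real attack.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for any supervised pipeline, e.g. a malicious/benign classifier.
X, y = make_classification(n_samples=4000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoning step: the attacker injects copies of "malicious" (class 1) training
# samples relabelled as "benign" (class 0) into the training feed.
malicious = X_train[y_train == 1]
X_poisoned = np.vstack([X_train, malicious])
y_poisoned = np.concatenate([y_train, np.zeros(len(malicious), dtype=int)])

poisoned = LogisticRegression(max_iter=1000).fit(X_poisoned, y_poisoned)

# Compare how much of class 1 each model still catches on clean test data.
print("clean model recall on class 1:   ", recall_score(y_test, clean.predict(X_test)))
print("poisoned model recall on class 1:", recall_score(y_test, poisoned.predict(X_test)))
```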

In the final analysis, artificial intelligence today is machine intelligence taught by humans. We are still many years away from true artificial intelligence, leaving aside the question of whether it can be achieved at all. For now, AI is best viewed as a tool for automating many routine human tasks, with success and failure rates similar to those of humans, only much faster and far cheaper than an expensive team of analysts.

Finally, whether the concern is algorithmic bias or the misuse and abuse of AI, every user of artificial intelligence should keep one thing in mind: at this stage, at least, we cannot rely too heavily on the output of artificial intelligence.

Source: 51cto.com