
Hackers use AI face-swapping technology to apply for jobs: artificial intelligence security issues cannot be ignored

PHPz
Release: 2023-04-08 14:51:14

Since the COVID-19 pandemic, many companies in the United States have adopted the "Work From Home" (WFH) model. The FBI's Internet Crime Complaint Center (IC3) reports that it has recently received complaints from many corporate employers that, during recruitment, job applicants have stolen other people's identities and used deepfake technology to take part in remote interviews.


These positions involve information technology, computer programming, databases, and other software-related fields. Some applicants try to use other people's backgrounds and expertise to land jobs, and use deepfake technology to forge their interview videos.

Employers found that, during online interviews, an applicant's movements or lip movements were not consistent with their voice. For example, when the sound of a sneeze or a cough was heard, the video did not stay in sync with it.

When employers ran background checks on these applicants, they found that some were indeed using other people's identities to apply. If the applicant is merely looking for a job, the harm is limited; but if the applicant is a hacker, then once the contract is signed they can walk straight into the company and gain access to confidential data.


Are you wondering whether this technology is really that easy to use?

The answer is yes: it is indeed very advanced.

Deepfakes take advantage of the powerful image-generation capabilities of generative adversarial networks (GANs), which can combine and superimpose existing images and videos onto source images and videos, capturing the fine details of a person's face. After years of development, deepfake technology can now swap faces in real time with almost no visible incongruity.
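To make the adversarial setup concrete, here is a minimal GAN sketch in PyTorch. It trains on toy one-dimensional data instead of face images, and every name in it is illustrative rather than part of any real deepfake tool.

```python
# Minimal GAN sketch (PyTorch): a generator learns to mimic a "real" data
# distribution while a discriminator tries to tell real from fake.
# Toy 1-D Gaussian data stands in for face images; all names are illustrative.
import torch
import torch.nn as nn

LATENT_DIM = 8

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1),                      # outputs a fake "sample"
)
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.LeakyReLU(0.2),
    nn.Linear(32, 1), nn.Sigmoid(),        # probability that the input is real
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" distribution N(3, 0.5)
    noise = torch.randn(64, LATENT_DIM)
    fake = generator(noise)

    # Discriminator step: push real toward 1, fake toward 0
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to fool the discriminator into outputting 1 for fakes
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should cluster near 3.0
print(generator(torch.randn(5, LATENT_DIM)).detach().squeeze())
```

Real deepfake pipelines add encoder-decoder face models, landmark alignment, and blending on top of this basic adversarial loop, but the generator-versus-discriminator idea is the same.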

However, when it comes to video, deepfakes still struggle to animate facial expressions convincingly. People in the forged footage either never blink, or blink too frequently or unnaturally, and the audio does not match the synthesized face naturally enough.

So even a clip lasting only ten seconds can raise suspicion, and a full interview runs much longer, making these flaws all the easier to expose.
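These blink and lip-sync artifacts are also what simple screening heuristics look for. Below is a small sketch of the eye-aspect-ratio (EAR) blink check; it assumes per-frame eye landmarks are already available from a face-landmark detector such as dlib's 68-point model (not shown), and the threshold values are illustrative.

```python
# Sketch of the eye-aspect-ratio (EAR) heuristic for spotting unnatural blinking.
# Assumes 6 (x, y) landmarks per eye for each video frame are already extracted,
# e.g. from dlib's 68-point face-landmark model (landmark extraction not shown).
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: array of shape (6, 2), ordered like dlib's per-eye landmarks."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])   # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_per_frame, closed_thresh=0.2, min_closed_frames=2):
    """Count blinks as runs of consecutive frames where EAR drops below threshold."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_closed_frames:
                blinks += 1
            run = 0
    return blinks

# Toy usage: a 30 fps, 10-second clip in which the subject never "blinks",
# the kind of pattern flagged as suspicious in deepfaked interviews.
ears = [0.31] * 300
print(count_blinks(ears))   # 0 blinks in 10 s is suspicious; people blink ~15-20/min
```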

The progress and changes in science and technology are a double-edged sword.

Although artificial intelligence technology provides us with massive conveniences, it may also bring about a series of issues such as security, ethics, and privacy.

At its core, artificial intelligence uses algorithms, computing power, and data to solve deterministic problems in complete-information, structured environments. In this data-driven era, artificial intelligence faces many security risks.

First, there are data poisoning attacks.

That is, hackers inject malicious data to reduce the reliability and accuracy of an AI system, leading it to make wrong decisions. Adding fake data, malicious samples, and the like to the training data destroys its integrity, which in turn biases the decisions of the trained model.

If this kind of attack were carried out against an autonomous driving system, it could cause the vehicle to violate traffic rules or even cause an accident.
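As a concrete illustration of poisoning, here is a minimal label-flipping sketch on a toy scikit-learn classifier; the synthetic dataset and the 30% flip rate are illustrative assumptions, not drawn from any real incident.

```python
# Sketch of a label-flipping poisoning attack on a toy classifier (scikit-learn).
# Flipping a fraction of the training labels degrades test accuracy, illustrating
# how corrupted training data skews the decisions of the resulting model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# The attacker flips the labels of 30% of the training points
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_te, y_te))
print("poisoned accuracy:", poisoned_model.score(X_te, y_te))
```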

Second, there is the problem of data leakage.

Reverse (model-inversion) attacks can leak the data behind an algorithm's model. Smart devices such as fitness bands, smart speakers with biometric identification, and smart medical systems are now widely used and collect personal information from every direction, including faces, fingerprints, voiceprints, irises, heartbeats, and genetic data. This information is unique and immutable; once leaked or misused, the consequences are serious.
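A rough sketch of such a reverse (model-inversion style) attack follows, in PyTorch: starting from a blank input, gradient ascent finds a pattern the model strongly associates with a target class, which can reveal properties of the training data. The "victim" model, its input dimensions, and the white-box gradient access are all illustrative assumptions.

```python
# Sketch of a model-inversion style attack (PyTorch): optimize the INPUT so the
# victim model assigns it strongly to a target class, recovering a pattern that
# reflects what the model learned about that class. Everything here is a stand-in.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in victim: a tiny classifier over 16-dim "biometric feature" vectors.
# (Assume it was trained elsewhere; the attacker only queries outputs/gradients.)
victim = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
for p in victim.parameters():
    p.requires_grad_(False)          # the attacker optimizes the input, not the model

target_class = 2
x = torch.zeros(1, 16, requires_grad=True)     # attacker starts from a blank input
opt = torch.optim.Adam([x], lr=0.05)

for step in range(300):
    logits = victim(x)
    # Maximize the target-class logit while keeping the input small/plausible
    loss = -logits[0, target_class] + 0.01 * x.norm()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("reconstructed input for class", target_class, ":", x.detach())
```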

For example, it has been revealed that large numbers of face photos, collected by stores without users' consent, were leaked, and many of them have ended up circulating on the black market, creating risks of fraud and threats to financial security.

Third, there are network risks.

Artificial intelligence systems inevitably involve network connections, and AI technology itself can make network attacks more intelligent: it can be used for intelligent data theft and data-extortion attacks, or to automatically generate large volumes of false threat intelligence in order to mislead and attack analysis systems.

The main attack methods include: evasion (bypass) attacks, inference attacks, backdoor attacks, model extraction attacks, attribute inference attacks, Trojan attacks, model inversion attacks, anti-watermark attacks, and reprogramming attacks.
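To ground the first item on that list, here is a minimal sketch of an FGSM-style evasion (bypass) attack in PyTorch; the classifier is an untrained stand-in, so the before/after predictions are purely illustrative.

```python
# Sketch of an FGSM-style evasion ("bypass") attack in PyTorch: a tiny input
# perturbation, aligned with the sign of the loss gradient, can flip a
# classifier's decision. The model here is an untrained stand-in, so the
# printed predictions only illustrate the mechanism, not a real result.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 28 * 28)          # stand-in "image", pixels in [0, 1]
y = torch.tensor([3])               # its (assumed) true label
x.requires_grad_(True)

loss = loss_fn(model(x), y)
loss.backward()                     # gradient of the loss w.r.t. the input

epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()   # FGSM step

print("prediction before:", model(x).argmax(dim=1).item())
print("prediction after: ", model(x_adv).argmax(dim=1).item())
```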

We must clearly realize that data security in the era of artificial intelligence also faces many new challenges. Protecting data security and algorithm security has become a top priority for enterprises.

Source: 51cto.com