What impact does ChatGPT have on data center network security?

OpenAI's recently released chatbot ChatGPT has attracted widespread attention for its powerful capabilities. Many people even believe the world has begun to change.

Students are already having ChatGPT write papers for them, and college students are scrambling to figure out how to use this AI tool in their studies. We have seen ChatGPT pass law school exams, business school exams, and even medical licensing exams. Employees around the world have started using it to write emails, reports, and even computer code.

ChatGPT is not yet perfect, and its training data is not up to date, but it is more powerful than any AI system ordinary people have encountered before, and its user experience is friendlier than that of enterprise-grade AI systems.

It seems that once a large language model like ChatGPT gets big enough and has enough training data, enough parameters, and enough layers in its neural network, strange things start to happen. It develops "features" not apparent or possible in smaller models. In other words, it starts acting as if it has common sense and an understanding of the world—or at least something approximating those things.

Recent news shows major technology companies rushing to respond. Microsoft invested US$10 billion in OpenAI and added ChatGPT functionality to Bing, making the long-dormant search engine a talking point again.

Google has also announced its own AI chatbot plans and invested in OpenAI competitor Anthropic, which was founded by former OpenAI employees and has its own chatbot, Claude.

Amazon announced plans to build its own ChatGPT competitor, along with a partnership with Hugging Face, another AI startup. Meta, Facebook's parent company, is fast-tracking its own AI work as well.

Fortunately, security experts have access to this new technology too. They can use it for research, for help writing emails and reports, for help writing code, and in many more ways we will dig into below.

But disturbingly, bad actors are also using it for all of these things, as well as phishing and social engineering. They also use ChatGPT to help them create deepfakes at a scale and fidelity unimaginable just a few months ago. ChatGPT itself can also be a security threat.

Next, let's discuss the security issues these AI tools may bring to the data center, starting with the ways malicious actors may use, and in some cases are already using, ChatGPT. We then explore the benefits and dangers of AI tools like ChatGPT for cybersecurity professionals.

How Bad Guys Use ChatGPT

There is no doubt that malicious actors are already using ChatGPT. So how can ChatGPT be used to help spur cyberattacks? In a BlackBerry survey of IT leaders released in February, 53% of respondents said it would help hackers create more believable phishing emails, and 49% pointed to its ability to help hackers improve their coding skills.

Another finding from the survey: 49% of IT and cybersecurity decision-makers said ChatGPT would be used to spread misinformation and disinformation, and 48% believed it could be used to create entirely new malware. Another 46% of respondents said ChatGPT could help improve existing attacks.

Dion Hinchcliffe, vice president and principal analyst at Constellation Research, said: "We are seeing coders and even non-coders using ChatGPT to generate vulnerabilities that can be effectively exploited."

After all, AI models have read everything that has been published publicly. That includes "every vulnerability research report," Hinchcliffe said, "and every forum discussion by every security expert. It's like a super brain that can break systems in all kinds of ways."

It's a scary prospect.

And, of course, attackers can also use it to write prose, he added. "We're going to be inundated with misinformation and phishing content from all directions."

Cloud Security Alliance CEO Jim Reavis said he has seen some incredible viral experiments with AI tools over the past few weeks.

"You can see it writing a lot of code for security orchestration, automation and response tools, DevSecOps and general cloud container hygiene," he said. "ChatGPT generates a large number of security and privacy policies. Perhaps, most notably, we conduct a lot of testing in order to create high-quality phishing emails, hoping to make our defenses more resilient in this regard."

In addition, Reavis said, several major cybersecurity vendors already have or will soon have similar technology in their engines, trained on specific rules.

"We have seen tools with natural language interface capabilities before, but there has not been a widely open, customer-facing interface for ChatGPT," he added. "I hope to see commercial solutions that interface with ChatGPT soon, but I think the sweet spot right now is system integration of multiple network security tools with ChatGPT and DIY security automation in the public cloud."

Overall, he said, ChatGPT and its peers hold great promise for helping data center cybersecurity teams operate more efficiently, scale constrained resources, and identify new threats and attacks.

"Over time, nearly every cybersecurity function will be enhanced with machine learning," Reavis said. "Additionally, we know that malicious actors are using tools like ChatGPT, and you should assume you will need to leverage AI to combat malicious AI."

How Email Security Vendors Use ChatGPT

For example, email security vendor Mimecast is already using large language models to generate synthetic emails to train its own phishing detection AI tools.

“We typically use real emails to train our models,” said Jose Lopez, chief data scientist and machine learning engineer at Mimecast.

Creating synthetic data for training sets is one of the main advantages of large language models such as ChatGPT. "Now we can use this large language model to generate more emails," Lopez said.
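Mimecast has not disclosed its pipeline or model, but the general technique is simple to sketch. Below is a minimal, hypothetical example, assuming the openai Python package (the pre-1.0 ChatCompletion interface) and the gpt-3.5-turbo model; it is illustrative only, not Mimecast's actual system:

```python
# Sketch: generate synthetic phishing-style emails to augment a
# phishing-detection training set. Assumes the openai Python package
# (pre-1.0 ChatCompletion API) and OPENAI_API_KEY in the environment.
# Illustrative only -- not Mimecast's actual (undisclosed) pipeline.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

PROMPT = (
    "Write a short, realistic-looking email that urgently asks the "
    "recipient to reset their corporate VPN password via a link. "
    "It will be used only as labeled training data for a phishing filter."
)

def generate_samples(n: int = 5) -> list[dict]:
    samples = []
    for _ in range(n):
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": PROMPT}],
            temperature=1.0,  # high temperature -> more varied samples
        )
        text = response.choices[0].message.content
        samples.append({"text": text, "label": "phishing"})  # pre-labeled
    return samples

if __name__ == "__main__":
    for sample in generate_samples(3):
        print(sample["label"], "|", sample["text"][:80])
```

Because every generated email is labeled at creation time, the usual bottleneck of hand-annotating training data disappears, which is exactly the advantage Lopez describes.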

However, he declined to disclose which specific large language model Mimecast uses. He said the information was the company's "secret weapon."

However, Mimecast does not currently intend to detect whether incoming emails were generated by ChatGPT. That's because it's not just bad actors using ChatGPT. AI is an incredibly useful productivity tool that many employees are using to improve their own, perfectly legal communications.

For example, Lopez, a native Spanish speaker, now uses ChatGPT instead of a grammar checker to improve his own writing.

Lopez also uses ChatGPT to help write code - something many security professionals are probably doing.

“In my day job, I use ChatGPT every day because it’s really useful for programming,” Lopez said. "Sometimes it's wrong, but it's right often enough to open your mind to other methods. I don't think ChatGPT will turn incompetent people into super hackers."

The Rise of AI-Driven Security Tools

OpenAI has begun working on improving the accuracy of the system. Microsoft gives it access to the latest information on the web through Bing Chat.

Lopez added that the next version will be a huge leap in quality. In addition, an open-source version of ChatGPT is coming soon.

"In the near future we will be able to fine-tune models for specific things," he said. “Now you don’t just have a hammer — you have a whole suite of tools.”

For example, businesses can fine-tune models to monitor relevant activity on social networks and look for potential threats. Only time will tell if the results are better than current methods.

It’s also becoming easier and cheaper to add ChatGPT to existing software; on March 1, OpenAI released an API for developers to access ChatGPT and the speech-to-text model Whisper.
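For orientation, here is a minimal sketch of that API surface as released, assuming the openai Python package in its early-2023 (pre-1.0) form; later SDK versions changed this interface, so check current documentation before relying on it:

```python
# Minimal sketch of the ChatGPT and Whisper APIs as released in March 2023
# (openai Python package, pre-1.0 interface). Later SDK versions changed
# this surface, so treat the calls below as historical/illustrative.
import openai

openai.api_key = "sk-..."  # placeholder; load from a secret store in practice

# ChatGPT: gpt-3.5-turbo via the ChatCompletion endpoint
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a concise security assistant."},
        {"role": "user", "content": "Summarize this firewall change request."},
    ],
)
print(chat.choices[0].message.content)

# Whisper: speech-to-text transcription
with open("incident-call.mp3", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)
print(transcript["text"])
```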

Generally speaking, enterprises are rapidly adopting AI-driven security tools to combat rapidly evolving threats at a larger scale than ever before.

According to the latest Mimecast survey, 92% of enterprises are already using or planning to use artificial intelligence and machine learning to strengthen their cybersecurity.

In particular, 50% believe it can detect threats more accurately, 49% believe it can improve their ability to block threats, and 48% believe it can remediate attacks more quickly when they do occur.

81% of respondents said AI systems that provide real-time, contextual alerts to users of email and collaboration tools would be a huge boon.

The report stated: "12% even said that the benefits of such a system will completely change the way cybersecurity is implemented."

Ketaki Borade, a senior analyst in Omdia's cybersecurity practice, said AI tools like ChatGPT could also help close the cybersecurity skills gap. "If prompts are provided correctly, using such tools can speed up the simpler tasks, and limited resources can be focused on more time-sensitive, higher-priority problems."

"These large language models are a This is a fundamental paradigm shift," said Yale Fox, IEEE member and founder and CEO of Applied Science Group. "The only way to counter malicious AI-driven attacks is to use AI in defense. Data center security managers need to upskill existing cybersecurity resources and find new personnel who specialize in artificial intelligence."

The Dangers of Using ChatGPT in the Data Center

As mentioned earlier, AI tools like ChatGPT and Copilot can help security professionals write code more efficiently. However, according to recent research out of Cornell University, programmers who used an AI assistant were more likely to write insecure code, while believing that code to be more secure than programmers who worked without one.
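As an illustration of the kind of flaw at issue (our own example, not one taken from the study), an assistant might suggest string-built SQL where a parameterized query is the safe form:

```python
# Illustrative only: the insecure pattern AI assistants have been observed
# suggesting, next to the safe equivalent. Not taken verbatim from the study.
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # BAD: string interpolation allows SQL injection
    # (e.g., username = "x' OR '1'='1")
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # GOOD: parameterized query; the driver escapes the value
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```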

This is just the tip of the iceberg when it comes to the potential drawbacks of using ChatGPT without considering the risks.

There have been several well-publicized instances of ChatGPT or Bing Chat confidently providing incorrect information, making up statistics and quotes, or giving completely wrong explanations of particular concepts.

People who trust it blindly may end up in a very bad situation.

"If you use a script developed by ChatGPT to perform maintenance on 10,000 virtual machines and the script has errors, you will have major problems," said Reavis of the Cloud Security Alliance.

Data Breach Risk

Another potential risk for data center security professionals using ChatGPT is data leakage.

The reason OpenAI makes ChatGPT free is that it can learn from interactions with users. So, for example, if you ask ChatGPT to analyze the security posture of your data center and identify weak points, you have now taught ChatGPT all of your security vulnerabilities.

Now consider a February survey by the work-oriented social network Fishbowl, which found that 43% of professionals use ChatGPT or similar tools at work, up from 27% a month earlier. Of those who do, 70% do not tell their boss. The potential security risk is therefore high.

This is why JPMorgan Chase, Amazon, Verizon, Accenture and many other companies have reportedly banned their employees from using the tool.

OpenAI's new ChatGPT API, released this month, will allow companies to keep their data private and opt out of having it used for training, but there is no guarantee against accidental leaks.
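Until such guarantees exist, a defensive habit is to strip obviously sensitive identifiers from anything sent to a third-party model. A rough, hypothetical redaction pass might look like this (the pattern list is illustrative, not exhaustive):

```python
# Hypothetical pre-send scrubber: masks IPs, hostnames, and obvious secrets
# before a prompt leaves the building. Patterns are illustrative, not complete.
import re

REDACTIONS = [
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "<IP>"),           # IPv4
    (re.compile(r"\b[\w-]+\.internal\.example\.com\b"), "<HOST>"),  # internal FQDNs (assumed suffix)
    (re.compile(r"(?i)\b(api[_-]?key|password|secret)\s*[:=]\s*\S+"), r"\1=<REDACTED>"),
]

def scrub(prompt: str) -> str:
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

if __name__ == "__main__":
    raw = "Host db01.internal.example.com (10.0.3.17) fails auth, password=hunter2"
    print(scrub(raw))
    # -> "Host <HOST> (<IP>) fails auth, password=<REDACTED>"
```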

In the future, once an open source version of ChatGPT is available, data centers will be able to run it locally, behind their firewalls, avoiding possible exposure to outsiders.
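When such weights become available, serving them on-premises could follow the familiar Hugging Face pattern. A sketch assuming the transformers library, with a placeholder model name standing in for whatever open model actually ships:

```python
# Sketch: running an open-source chat model entirely on-premises with
# Hugging Face transformers. The model name below is a placeholder; swap in
# whichever open ChatGPT-style weights your data center approves.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="some-org/open-chat-model",  # placeholder, not a real model ID
    device_map="auto",                 # use local GPUs if present
)

prompt = "List three checks for hardening an SSH bastion host."
result = generator(prompt, max_new_tokens=200, do_sample=True)
print(result[0]["generated_text"])
```

Nothing in this setup calls out to an external API, so prompts about internal weaknesses never leave the firewall.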

Ethical Issues

Finally, Carm Taglienti, a distinguished engineer at Insight, said there are potential ethical risks in using ChatGPT-style technology to secure internal data centers.

"These models are very good at understanding how we communicate as humans," he said. Therefore, a ChatGPT-style tool with access to employee communications may be able to uncover intent and subtext that indicates a potential threat.

"We're trying to prevent network hacks and internal environment hacks. A lot of breaches happen because people walk out the door with stuff," he said.

He added that something like ChatGPT "could be very valuable to an organization." "But now we're entering this ethical space where people are going to profile me and monitor everything I do."

This is a Minority Report-style future that data centers may not be ready for.
