
OpenAI discloses the security measures it uses when building artificial intelligence models such as GPT-4

WBOY
Release: 2023-04-13 20:52:10


OpenAI recently announced the security measures it builds into its artificial intelligence tools. The company is the developer of GPT-4, the powerful large language model at the core of its ChatGPT chatbot and other AI systems. The disclosure comes amid growing calls for more controls over the development of generative AI systems.

In a blog post, OpenAI detailed the steps it is taking to prevent its artificial intelligence systems from generating harmful content and from violating data privacy regulations. While the company's AI tools have driven a global boom in generative AI, regulators have in recent weeks taken an interest in the safety of such systems, with Italy banning the use of ChatGPT over potential violations of GDPR.

Artificial intelligence experts, including Elon Musk and Apple co-founder Steve Wozniak, recently called in an open letter for a temporary halt to the development of large language models (LLMs). U.S. President Biden also joined the discussion, telling reporters that artificial intelligence companies must put safety first.

How OpenAI built GPT-4 with security in mind

OpenAI said it spent six months refining GPT-4, its most advanced model to date, before releasing it last month, in order to make the system as difficult as possible to use for nefarious purposes.

Security researchers have previously demonstrated that ChatGPT's safety controls can be bypassed by "tricking" the chatbot into imitating a badly behaved AI system, causing it to generate hate speech or malware code. OpenAI says this is less likely to happen with GPT-4 than with the GPT-3.5 model.

The company’s engineers said: “Compared to GPT-3.5, GPT-4 is 82% less likely to respond to requests for disallowed content, and we have built a robust system to monitor abuse of our artificial intelligence systems. We are also developing features that will allow developers to set stricter standards for model output, to better support developers and users who need such capabilities.”
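OpenAI has not published the details of these developer-facing controls, but the general pattern of enforcing stricter output standards can be sketched with the public API that existed at the time. The example below is a minimal sketch, assuming the openai Python package's 0.27-era interface and an OPENAI_API_KEY environment variable: it constrains the model with a developer-written system message and screens the reply with the Moderation endpoint before returning it. The policy text and the guarded_completion helper are illustrative assumptions, not OpenAI's own tooling.

```python
# Minimal sketch, not OpenAI's internal tooling: constrain output with a
# developer-written system message and screen the reply with the Moderation
# endpoint. Assumes the openai Python package (0.27-era interface) and an
# OPENAI_API_KEY environment variable.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical, developer-chosen policy expressing stricter output standards.
SYSTEM_POLICY = (
    "You are a support assistant. Refuse requests for disallowed content, "
    "never reveal personal data, and answer only questions about our product."
)

def guarded_completion(user_message: str) -> str:
    """Generate a reply under the policy, then screen it before returning."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_POLICY},
            {"role": "user", "content": user_message},
        ],
    )
    reply = response["choices"][0]["message"]["content"]

    # Second line of defence: check the generated text against the
    # Moderation endpoint and withhold it if it is flagged.
    moderation = openai.Moderation.create(input=reply)
    if moderation["results"][0]["flagged"]:
        return "Sorry, I can't help with that."
    return reply

if __name__ == "__main__":
    print(guarded_completion("Summarize our refund policy in two sentences."))
```

The point of the sketch is only that output standards can be enforced both before generation (the system message) and after it (the moderation check); whatever features OpenAI ships for developers may look quite different.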

Will AI systems leak internet users' personal data?

OpenAI also took the opportunity to respond to data regulators' concerns about the way its models collect data from the internet for training purposes. After Italy banned ChatGPT, Canada launched an investigation into the chatbot's safety, and other countries in Europe are considering whether to follow Italy's lead.

OpenAI said: “Our large language models are trained on an extensive corpus of text, including publicly available content, licensed content, and content generated by human reviewers. We do not use this data to sell services, advertise, or build profiles of users.”

The company also detailed how it ensures that personal data is not leaked during training. “While some of our training data includes personal information that is available on the public internet, we want our large language models to learn about the world, not about individuals,” the company said. “So we work to remove personal information from the training data set, fine-tune the model to deny requests for personal information, and respond to individuals’ requests to have their personal information removed from our AI systems. These steps minimize the possibility that our models will produce responses containing personal information.”
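OpenAI does not describe its filtering pipeline, but the first step it mentions, removing personal information from the training data set, can be illustrated with a toy example. The sketch below strips email addresses and phone numbers from raw text using regular expressions before the text would enter a training corpus; the patterns and the scrub_pii helper are illustrative assumptions, not OpenAI's actual method.

```python
# Toy illustration only -- not OpenAI's actual pipeline. Scrubs two common
# kinds of personal information (email addresses, phone numbers) from text
# before it would be added to a training corpus.
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_pii(text: str) -> str:
    """Replace obvious PII spans with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

if __name__ == "__main__":
    sample = "Contact Jane at jane.doe@example.com or +1 (555) 123-4567."
    print(scrub_pii(sample))
    # -> "Contact Jane at [EMAIL] or [PHONE]."
```

A production pipeline would also have to handle names, addresses, and identifiers that regular expressions cannot reliably catch, which is presumably why OpenAI pairs data filtering with fine-tuning the model to refuse requests for personal information.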

U.S. President Biden joins the debate over artificial intelligence

Before OpenAI issued this statement, U.S. President Biden told reporters that artificial intelligence developers are responsible for ensuring their products are safe before making them public.

Biden spoke after discussing the development of artificial intelligence with his Science and Technology Advisory Council. He said that the U.S. government is committed to advancing the Artificial Intelligence Bill of Rights introduced in October last year to protect individuals from the negative impacts of advanced automated systems.

Biden said: “We proposed a bill of rights last October to ensure that important protections are built into AI systems from the start, so we don’t have to go back and add them later. I look forward to our discussions ensuring responsible innovation and appropriate guardrails to protect the rights and safety of Americans, protect their privacy, and address possible bias and disinformation.”
