2023 has come to mark the start of the mainstream artificial intelligence era, and almost everyone is talking about ChatGPT.
Generative AI language models like ChatGPT have captured our attention and imagination since we first watched an AI converse with us like a real person and generate articles, poems, and other new content that strikes us as creative. Generative AI solutions appear full of breakthrough potential for faster, better innovation, productivity, and value realization. However, their limitations have not been widely noted, and best practices for data privacy and data management are not widely understood.
Recently, many in the tech and security communities have sounded the alarm over the lack of understanding and adequate regulation of AI technology. We are already seeing concerns about the reliability of AI tool output, leakage of intellectual property (IP) and sensitive data, and violations of privacy and security.
Samsung’s incident with ChatGPT made headlines after the tech giant inadvertently leaked its own secrets to the AI. Samsung isn't alone: a Cyberhaven study found that 4% of employees have put sensitive corporate data into large language models. Many people don’t realize that when a model is trained on company data, the AI company may be able to reuse that data elsewhere.
Cybersecurity intelligence firm Recorded Future reported: “Within days of ChatGPT’s release, we discovered a number of threat actors on dark web and special-access forums sharing flawed but powerful malware, social engineering tutorials, money-making schemes, and more, all made possible by using ChatGPT.”
In terms of privacy, when an individual signs up for a tool like ChatGPT, the tool gains access to IP addresses, browser settings, and browsing behavior, much like today's search engines. But the stakes are higher because "it could reveal political beliefs or sexual orientation without the individual's consent and could mean embarrassing or even career-destroying information being released," said Jose Blaya, director of engineering at Private Internet Access.
Clearly, we need better regulations and standards for implementing these new AI technologies. Yet there is little discussion of the important role of data governance and data management, even though they play a key part in enterprise adoption and safe use of AI.
Here are the three areas we should focus on:
Data governance and training data transparency: A core issue surrounds proprietary pre-trained AI models, or large language models (LLMs). Machine learning programs built on LLMs draw on massive data sets from many different sources. The problem is that an LLM is a black box offering little transparency into its source data. We do not know whether these sources contain fraudulent data or personally identifiable information (PII), or whether they are trustworthy, unbiased, accurate, or legal. LLM developers do not share their source data.
The Washington Post analyzed Google's C4 data set, which spans 15 million websites, and found dozens of objectionable sites containing inflammatory content, PII, and other suspicious material. We need data governance that requires transparency into the data sources used and the validity and trustworthiness of the knowledge those sources contain. For example, your AI bot might be trained on data from unverified sources or fake news sites, biasing knowledge that then becomes part of your company’s new policies or R&D initiatives.
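As a rough illustration of what vetting source data can look like in practice, the sketch below screens candidate training text for a few common PII patterns and an approved-source list before admitting it to a corpus. The regexes, the `approved_sources` set, and the function names are hypothetical placeholders, a minimal sketch rather than a complete PII detector:

```python
import re

# Deliberately minimal, illustrative PII patterns; a real governance
# pipeline would use a vetted detection library plus human review.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def pii_findings(text: str) -> dict[str, int]:
    """Count matches for each PII pattern in a candidate document."""
    return {name: len(p.findall(text)) for name, p in PII_PATTERNS.items()}

def admit_to_corpus(doc: str, source: str, max_hits: int = 0) -> bool:
    """Admit a document only if it comes from an approved source and
    contains no detected PII; otherwise flag it for review."""
    approved_sources = {"internal-wiki", "licensed-dataset"}  # assumed list
    hits = sum(pii_findings(doc).values())
    if source not in approved_sources or hits > max_hits:
        print(f"flagged: source={source}, pii_hits={hits}")
        return False
    return True

if __name__ == "__main__":
    sample = "Contact Jane at jane.doe@example.com or 555-867-5309."
    print(admit_to_corpus(sample, source="scraped-web"))  # False: unapproved source, PII found
```

Even a crude gate like this makes the provenance question explicit: every document must declare a source, and that source must be on a list someone is accountable for.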
Data isolation and data domains: Currently, AI vendors have differing privacy policies on how they handle the data you provide. Employees may unintentionally feed data to an LLM in their prompts, not knowing that the model may incorporate it into its knowledge base. Companies may unknowingly expose trade secrets, software code, and personal data to the world.
Some AI solutions offer workarounds, such as APIs that protect data privacy by keeping your data out of the pre-trained model, but this also limits the AI's functional value: the ideal use case is to augment a pre-trained model with your case-specific data while preserving data privacy.
One solution is to have pre-trained AI tools understand the concept of a data “domain.” “Common” domains of training data are used for pre-training and shared across general applications, while models trained on “proprietary” data are securely confined within the organization's boundaries. Data management ensures that these boundaries are created and preserved.
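One way to make that boundary concrete is to tag every record with a domain label and enforce a policy gate before anything leaves the organization. The sketch below uses assumed names (`Record`, `Domain`, the export rule); it illustrates the idea and is not any vendor's API:

```python
from dataclasses import dataclass
from enum import Enum

class Domain(Enum):
    COMMON = "common"            # shareable with external pre-training pipelines
    PROPRIETARY = "proprietary"  # must stay inside the organization

@dataclass
class Record:
    text: str
    domain: Domain
    owner: str  # team or department responsible for the data

def exportable(records: list[Record]) -> list[Record]:
    """Return only records whose domain policy allows them to leave
    the organization, e.g. for an external fine-tuning service."""
    return [r for r in records if r.domain is Domain.COMMON]

records = [
    Record("Public product FAQ text", Domain.COMMON, "support"),
    Record("Unreleased chip design notes", Domain.PROPRIETARY, "r_and_d"),
]
print([r.text for r in exportable(records)])  # only the FAQ text leaves
```

The design point is that the domain label travels with the data, so any downstream tool can enforce the same boundary without re-deriving the classification.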
Derivatives of artificial intelligence: The third area of data management involves the data produced by AI processes and who ultimately owns it. Suppose, for example, I use an AI bot to solve a coding problem. If something is done incorrectly and causes a bug or error, normally we know who did what and can investigate and fix it. But with AI, it is difficult for organizations to define who is responsible for errors or bad results from tasks the AI performed. You can't blame the machine: somewhere, a human caused the error or the bad result.
The more complicated question is IP. Do you own the IP of works created with generative AI tools? How would you defend that ownership in court? According to the Harvard Business Review, the art world has already begun filing claims against certain AI applications.
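Provenance tracking is one practical response to both questions: if every AI-assisted artifact carries a record of who requested it, which model produced it, and when, then accountability and ownership arguments at least have a paper trail. The sketch below assumes a simple JSON-lines log with invented field names; it shows one possible shape, not an established standard:

```python
import json
import hashlib
from datetime import datetime, timezone

def log_ai_artifact(log_path: str, artifact: str, *, requester: str,
                    department: str, model: str, prompt: str) -> str:
    """Append a provenance record for an AI-generated artifact and
    return the artifact's content hash for later reference."""
    digest = hashlib.sha256(artifact.encode()).hexdigest()
    entry = {
        "artifact_sha256": digest,
        "requester": requester,    # the human who commissioned the work
        "department": department,  # owning team, for derivative-work claims
        "model": model,            # which AI tool produced it
        "prompt": prompt,          # what it was asked to do
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return digest

log_ai_artifact("provenance.jsonl", "def add(a, b): return a + b",
                requester="jdoe", department="platform",
                model="assumed-llm-v1", prompt="write an add function")
```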
In these early days, we do not yet know the full extent of AI's risks around bad data, privacy and security, intellectual property, and sensitive data sets. AI is also a broad field with multiple approaches, such as LLMs and automation built on business-process logic, and each raises its own questions. Here are some practices to explore through a combination of data governance policies and data management practices:
- Pause experimentation with generative AI until you have an oversight strategy, policies, and procedures for mitigating risks and validating results.
- Incorporate data management guiding principles, which starts with a solid understanding of your data wherever it resides. Where are your sensitive PII and customer data? How much IP data do you have, and where are those files located? Can you monitor usage to ensure these types of data are not inadvertently fed into AI tools, and prevent security or privacy breaches? (A minimal screening gate for outgoing prompts is sketched after this list.)
- Don't give AI applications more data than they need, and don't share any sensitive proprietary data. Lock down or encrypt IP and customer data to prevent it from being shared.
- Understand how, and whether, AI tools can be transparent about their data sources.
- Can the provider protect your data? Google shared this statement on its blog, but the "how" is unclear: "Whether a company is training a model in Vertex AI or building a customer service experience on Generative AI App Builder, private data is kept private, and not used in the broader foundation model training corpus." Read each AI tool's contract language to find out whether any data you provide to it can be kept confidential.
- Tag data with its owner: the individual or department that commissioned the derivative work. This is helpful because your company may ultimately be responsible for any work it produces, and you want to know how AI was integrated into the process and who was involved.
- Ensure data portability between domains. For example, a team may want to strip data of its IP and identifying features and feed it into a common training data set for future use. Automating and tracking this process is critical.
- Stay informed about industry regulations and guidance as they develop, and talk to peers in other organizations to learn how they are approaching risk mitigation and data management.
- Before starting any generative AI project, consult a legal expert to understand the risks and the processes to follow in the event of a data breach, privacy or IP violations, malicious actors, or false or erroneous results.
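As referenced in the list above, here is a minimal sketch of a screening gate that inspects an outgoing prompt before it reaches an external AI tool. The blocklist of codenames and the redaction pattern are illustrative assumptions; a production gate would rely on proper data loss prevention tooling fed by your data classification inventory:

```python
import re

# Illustrative blocklist of internal project codenames; in practice this
# would come from your data classification inventory.
PROPRIETARY_TERMS = {"project-falcon", "acme-roadmap-2025"}
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def screen_prompt(prompt: str) -> str:
    """Refuse prompts naming proprietary projects; redact email
    addresses before the prompt leaves the organization."""
    lowered = prompt.lower()
    for term in PROPRIETARY_TERMS:
        if term in lowered:
            raise ValueError(f"prompt blocked: references {term!r}")
    return EMAIL.sub("[REDACTED-EMAIL]", prompt)

safe = screen_prompt("Summarize feedback from alice@example.com about onboarding")
print(safe)  # email is redacted before the prompt is sent to the AI tool
```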
Artificial intelligence is developing rapidly and holds great promise, with the potential to accelerate innovation, cut costs, and improve user experience at an unprecedented rate. But like most powerful tools, AI needs to be used with caution and in the right context, with appropriate data governance and data management guardrails in place. No clear standards have yet emerged for data management in AI, and this is an area that needs further exploration. In the meantime, enterprises should exercise caution and ensure they have a clear understanding of data exposure, data breach, and data security risks before adopting AI applications.