
Beyond RAG: Five More Ways to Eliminate Large-Model Hallucinations


Produced by 51CTO Technology Stack (WeChat ID: blog51cto)

It is well known that LLMs can hallucinate, that is, generate incorrect, misleading, or nonsensical information.

Interestingly, some people, such as OpenAI CEO Sam Altman, view AI hallucinations as a form of creativity, while others believe hallucinations may even help produce new scientific discoveries.

In most cases, however, giving the correct answer is what matters, and hallucination is a flaw, not a feature.

So how do we reduce LLM hallucinations? Long context? RAG? Fine-tuning?

In practice, long-context LLMs are not foolproof, vector-search RAG often falls short, and fine-tuning comes with its own challenges and limitations.

The following are some advanced techniques that can be used to reduce LLM hallucinations.

1. Advanced prompts

There is a lot of discussion about whether better or more advanced prompts can solve the hallucination problem of large language models (LLMs).


Some people think that writing more detailed prompts will not help solve the hallucination problem, but others, such as Google Brain co-founder Andrew Ng, see real potential in it.

Ng believes that the reasoning capabilities of GPT-4 and other advanced models make them very good at interpreting complex prompts with detailed instructions.


“With many-shot learning, developers can give dozens, even hundreds, of examples in a prompt, and this is more effective than few-shot learning,” he writes.
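To make the idea concrete, below is a minimal sketch of many-shot prompting. The `call_llm` helper and the Q/A example format are hypothetical stand-ins, not any particular vendor's API.

```python
# A minimal sketch of many-shot prompting. `call_llm` is a hypothetical
# helper standing in for whatever chat-completion client you use.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def build_many_shot_prompt(instruction: str,
                           examples: list[tuple[str, str]],
                           query: str) -> str:
    """Pack dozens (or hundreds) of worked examples into a single prompt."""
    parts = [instruction, ""]
    for question, answer in examples:
        parts.append(f"Q: {question}\nA: {answer}\n")
    parts.append(f"Q: {query}\nA:")
    return "\n".join(parts)

examples = [
    ("Capital of France?", "Paris"),
    ("Capital of Japan?", "Tokyo"),
    # ...dozens or hundreds more examples in practice
]
prompt = build_many_shot_prompt(
    "Answer with the single correct city name. If unsure, say 'unknown'.",
    examples,
    "Capital of Australia?",
)
# answer = call_llm(prompt)
```

The instruction explicitly allows the model to say "unknown", which, combined with the large number of demonstrations, nudges it away from guessing.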


New tools for improving prompts keep appearing. For example, Anthropic released a “Prompt Generator” that converts simple descriptions into prompts optimized for large language models (LLMs). Through the Anthropic Console, you can generate production-ready prompts.

Recently, Marc Andreessen also said that with the right prompts, we can unlock the latent super genius in AI models. "Prompting techniques in different areas could unlock this latent super-genius," he added.

2. Meta AI’s Chain-of-Verification (CoVe)

Meta AI’s Chain-of-Verification (CoVe) is another such technique. It reduces hallucinations in large language models (LLMs) by breaking fact-checking down into manageable steps, improving response accuracy and mirroring human-driven fact-checking processes.


CoVe involves generating an initial response, planning verification questions, answering those questions independently, and generating a final verified response. This approach significantly improves the accuracy of the model by systematically verifying and correcting its output.

It improves performance in a variety of tasks such as list-based questions, closed-book question answering, and long-form text generation by reducing hallucinations and increasing factual correctness.
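Below is a minimal sketch of the four CoVe steps described above. The `call_llm` helper is a hypothetical single-turn completion function; this follows the paper's outline rather than Meta's reference implementation.

```python
# A minimal sketch of the Chain-of-Verification (CoVe) loop.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def chain_of_verification(query: str) -> str:
    # 1. Draft an initial (possibly hallucinated) answer.
    baseline = call_llm(f"Answer the question.\nQ: {query}\nA:")

    # 2. Plan verification questions that probe the draft's factual claims.
    plan = call_llm(
        "List fact-checking questions, one per line, for this draft answer.\n"
        f"Question: {query}\nDraft: {baseline}"
    )
    verification_questions = [q for q in plan.splitlines() if q.strip()]

    # 3. Answer each verification question independently, without showing the
    #    draft, so the check is not biased by the original hallucination.
    checks = [(q, call_llm(f"Q: {q}\nA:")) for q in verification_questions]

    # 4. Produce a final answer consistent with the verified facts.
    evidence = "\n".join(f"{q} -> {a}" for q, a in checks)
    return call_llm(
        f"Question: {query}\nDraft answer: {baseline}\n"
        f"Verified facts:\n{evidence}\n"
        "Rewrite the answer so it only states facts supported above."
    )
```

The key design choice is step 3: answering the verification questions in isolation prevents the model from simply restating its own draft.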

3. Knowledge Graph

RAG (Retrieval-Augmented Generation) is no longer limited to vector-database matching; many advanced RAG techniques have been introduced that significantly improve retrieval quality.


For example, knowledge graphs (KGs) can be integrated into RAG. By leveraging the structured, interconnected data in knowledge graphs, the reasoning capabilities of current RAG systems can be greatly enhanced.
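Below is a minimal sketch of knowledge-graph-augmented RAG. The toy triple store, `vector_search`, and `call_llm` are hypothetical stand-ins, not any particular framework's API.

```python
# Combining unstructured passage retrieval with structured graph facts.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def vector_search(query: str, k: int = 3) -> list[str]:
    return []  # placeholder for a real vector-store lookup

# Toy knowledge graph stored as (subject, relation, object) triples.
KG = [
    ("CoVe", "proposed_by", "Meta AI"),
    ("Raptor", "is_a", "retrieval technique"),
]

def kg_lookup(entity: str) -> list[str]:
    """Return triples whose subject or object matches the entity."""
    return [f"{s} {r} {o}" for s, r, o in KG
            if entity.lower() in (s.lower(), o.lower())]

def kg_rag_answer(query: str, entity: str) -> str:
    # Put both kinds of evidence into the prompt context.
    context = "\n".join(vector_search(query) + kg_lookup(entity))
    return call_llm(
        "Answer using only the context below; say 'not found' if it is missing.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
```

In a real system the graph facts would come from entity linking and multi-hop traversal rather than a hard-coded list, but the grounding principle is the same.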

4. Raptor

Another technique is Raptor, which handles questions that span multiple documents by creating higher levels of abstraction. It is particularly useful for queries involving concepts spread across several documents.


Approaches like Raptor fit well with long-context large language models (LLMs) because you can directly embed entire documents without chunking.

This method reduces hallucinations by integrating an external retrieval mechanism with the transformer model. When a query is received, Raptor first retrieves relevant, verified information from external knowledge bases.

The retrieved data is then embedded into the model's context along with the original query. By grounding the model's responses in facts and relevant information, Raptor ensures that the generated content is both accurate and contextually relevant.
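Below is a simplified sketch of the Raptor idea of building higher-level abstractions by recursively summarizing groups of chunks, then retrieving from every level of the resulting tree. The `call_llm` helper and the naive fixed-size grouping are hypothetical simplifications of the clustering and embedding steps used in practice.

```python
# Build a tree of summaries: layer 0 is the raw chunks,
# each higher layer summarizes groups from the layer below.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def build_summary_tree(chunks: list[str], group_size: int = 4) -> list[list[str]]:
    """Return layers of text, from raw chunks up to a single root summary."""
    layers = [chunks]
    while len(layers[-1]) > 1:
        current = layers[-1]
        summaries = []
        for i in range(0, len(current), group_size):
            group = "\n".join(current[i:i + group_size])
            summaries.append(call_llm(f"Summarize the key facts:\n{group}"))
        layers.append(summaries)
    return layers

# At query time, relevant nodes from all layers (not just leaf chunks) are
# placed in the prompt, so questions spanning many documents can be answered.
```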

5. Conformal Abstention

The paper "Relieving the Hallucination Phenomenon of Large Language Models through Conformal Abstention" introduces a method to determine the model by applying conformal prediction technology Methods to reduce hallucinations in large language models (LLMs) when responses should be avoided.


By using self-consistency to evaluate the similarity of sampled responses and leveraging conformal prediction for rigorous guarantees, this method ensures that the model only responds when it is confident in the answer's accuracy.

This method effectively limits the incidence of hallucinations while maintaining a balanced abstention rate, which is especially beneficial for tasks requiring long answers. It significantly improves the reliability of model output by avoiding erroneous or illogical responses.
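Below is a rough sketch of the abstention idea: sample several answers, score how much they agree, and abstain when agreement falls below a threshold calibrated on held-out data to bound the error rate. The `sample_llm` helper, the word-overlap similarity, and the threshold value are hypothetical simplifications of the paper's self-consistency scoring and conformal calibration.

```python
# Answer only when repeated samples agree; otherwise abstain.
def sample_llm(prompt: str) -> str:
    raise NotImplementedError  # one stochastic sample from the model

def agreement(answers: list[str]) -> float:
    """Fraction of answer pairs sharing most of their words (toy self-consistency)."""
    def similar(a: str, b: str) -> bool:
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(len(wa | wb), 1) > 0.5
    pairs = [(i, j) for i in range(len(answers)) for j in range(i + 1, len(answers))]
    if not pairs:
        return 1.0
    return sum(similar(answers[i], answers[j]) for i, j in pairs) / len(pairs)

def answer_or_abstain(prompt: str, threshold: float, n_samples: int = 5) -> str:
    answers = [sample_llm(prompt) for _ in range(n_samples)]
    # The threshold is chosen by conformal calibration on a held-out set,
    # which is what gives the statistical guarantee on the hallucination rate.
    if agreement(answers) < threshold:
        return "I don't know."
    return answers[0]
```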

6. RAG for reducing hallucinations in structured output

Recently, ServiceNow used RAG to reduce hallucinations in structured output, improving large language model (LLM) performance and achieving out-of-domain generalization while minimizing resource usage.


The technique involves a RAG system that retrieves relevant JSON objects from an external knowledge base before generating text. This ensures that the generation process is grounded in accurate and relevant data.


By incorporating this pre-generation retrieval step, the model is less likely to produce false or fabricated information, thereby reducing hallucinations. Furthermore, this approach allows smaller models to be used without sacrificing performance, making it both efficient and effective.
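Below is a minimal sketch of the pattern: retrieve the relevant JSON objects first, then ask the model to produce output grounded in them, and validate the result. The `retrieve_json_objects` and `call_llm` helpers are hypothetical stand-ins, not ServiceNow's implementation.

```python
# Ground structured generation in retrieved JSON objects.
import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def retrieve_json_objects(query: str) -> list[dict]:
    return []  # placeholder: look up matching objects in an external knowledge base

def generate_structured_output(query: str) -> dict:
    retrieved = retrieve_json_objects(query)
    prompt = (
        "Produce a JSON object answering the request, using only fields and "
        "values grounded in the retrieved objects below.\n"
        f"Retrieved: {json.dumps(retrieved)}\n"
        f"Request: {query}\nJSON:"
    )
    raw = call_llm(prompt)
    return json.loads(raw)  # fails loudly if the model's output is not valid JSON
```

Parsing the output with `json.loads` acts as a cheap final check: fabricated or malformed structures are rejected instead of silently passed downstream.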

To learn more about AIGC, please visit:

51CTO AI.x Community

https://www.51cto.com/aigc/
