
An in-depth analysis of knowledge conflicts in RAG and large models, jointly published by Tsinghua, Westlake University, and the Chinese University of Hong Kong

The AIxiv column is where this site publishes academic and technical content. Over the past few years, the AIxiv column has received more than 2,000 submissions covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, feel free to submit it or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com

The authors of this article are Xu Rongwu, a second-year master's student, and Qi Zehan, a first-year doctoral student, both at the Institute for Interdisciplinary Information Sciences, Tsinghua University. They are also the lead authors of the survey.

With the rapid development of artificial intelligence and large-model technology, Retrieval-Augmented Generation (RAG) has become a mainstream paradigm for text generation with large language models. Its representative form, the retrieval-augmented language model (RALM), can use retrieved documents directly to generate content without additional training. This advantage has made it very popular in industry, where it is widely deployed, for example in the New Bing search engine.

However, since 2023 the problems RALMs face in handling knowledge conflicts have gradually become a focus of research. Knowledge conflicts not only seriously degrade model performance on knowledge-intensive tasks, but also expose the model's vulnerability to misinformation, posing a security threat, especially in application scenarios with strict requirements for factual accuracy. Knowledge conflicts mainly manifest as inconsistencies between the model's internal parametric knowledge and the external context, as well as inconsistencies within the external context itself. In addition, researchers have observed conflicts within the model's parametric knowledge, i.e. self-contradiction, which may stem from conflicting information learned during pre-training.

Let’s look at a specific example:

(Figure: an example of the knowledge conflicts faced by a retrieval-augmented large model)

In the example above, the large model receives a factual question: which team has won the most championships in the World Cup? To answer it, a RALM may retrieve documents from the web and from a vector database, and may also include the conversation history formed by the user's previous prompts; together these constitute the contextual knowledge (Context, marked in yellow in the figure). At the same time, the large model has also seen information relevant to this question during pre-training; that information constitutes its parametric knowledge, also known as the model's "memory" (Parametric Knowledge / Memory, marked in blue in the figure). According to the sources of the two conflicting sides, conflicts can be divided, pairwise, into the following three categories (a minimal sketch of how these knowledge sources come together follows the list):

  • Context-Memory Conflict: conflict between contextual and parametric knowledge. Example 1: the knowledge the model retrieves from the web is up to date, while the knowledge it learned during training is outdated. Example 2: the model retrieves misinformation that conflicts with its parametric knowledge.

  • Inter-Context Conflict: conflict within the contextual knowledge. Example: documents obtained through web search conflict with one another because they were published at different times or are mixed with malicious misinformation.

  • Intra-Memory Conflict: conflict within the parametric knowledge. Example: for a factual question, prompts with the same meaning but different wording elicit different, contradictory answers from the model.
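
As a hedged illustration of how these knowledge sources meet, the sketch below shows one plausible way a RALM assembles its prompt from retrieved documents and chat history. The documents, the helper name `build_prompt`, and the chat history are all illustrative assumptions, not taken from the figure.

```python
# Minimal sketch of how a RALM assembles contextual knowledge before generation.
# The retrieved snippets and chat history form the "context"; whatever the model
# absorbed during pre-training is its "parametric knowledge". The two can disagree.
# All names here (retrieved_docs, chat_history, build_prompt) are illustrative.

retrieved_docs = [
    "As of the 2022 World Cup, Brazil has won the most titles (5).",
    "Argentina won the 2022 World Cup, its third title.",
]
chat_history = ["User previously asked about famous football players."]
question = "Which team has won the most championships in the World Cup?"

def build_prompt(docs, history, question):
    """Concatenate retrieved documents, dialogue history, and the question."""
    context = "\n".join(f"- {d}" for d in docs)
    history_text = "\n".join(history)
    return (
        "Answer the question using the documents below.\n"
        f"Documents:\n{context}\n"
        f"Conversation so far:\n{history_text}\n"
        f"Question: {question}\nAnswer:"
    )

print(build_prompt(retrieved_docs, chat_history, question))
# If the model's pre-training memory disagrees with the documents, generation faces
# a context-memory conflict; if the documents disagree with each other (as above),
# that is an inter-context conflict.
```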

The earliest literature on knowledge conflicts can be traced back to Longpre et al.'s EMNLP 2021 paper, "Entity-Based Knowledge Conflicts in Question Answering," which constructs conflicting knowledge in open-domain question answering through named-entity substitution and evaluates the language models of the time. With the rise of large language models in 2023 and the widespread industrial adoption of the RAG paradigm, research interest in knowledge conflicts has grown, because such conflicts greatly reduce model performance on key tasks, especially tasks that demand factuality.

Recently, researchers from Tsinghua University, the University of Cambridge, Westlake University, and the Chinese University of Hong Kong jointly published a survey that examines the three types of knowledge conflict in detail from three aspects: causes, manifestations, and solutions, helping readers better understand and respond to this challenge. In the authors' view, knowledge conflict is both a cause of degraded downstream performance and an effect that emerges from the natural complexity of knowledge itself and of how models learn knowledge.


  • Paper address: https://arxiv.org/abs/2403.08319

  • Project address: https://github.com/pillowsofwind/Knowledge-Conflicts-Survey

This survey:

1. provides the first systematic summary of research in the field of knowledge conflicts;

2. comprehensively analyzes the three types of conflict that large models may encounter, with a particular focus on conflicts within parametric knowledge;

3. examines each type of conflict not only in isolation but from the perspective of its "life cycle": its causes, its manifestations, and possible resolution strategies.


Exploring Context-Memory Conflict: causes, manifestations, and solutions

Causes

The core of Context-Memory Conflict lies in the discrepancy between contextual information and parametric knowledge. Its causes fall mainly into two categories: temporal misalignment (Temporal Misalignment) and misinformation pollution (Misinformation Pollution).

1. Temporal Misalignment

Temporal misalignment means that the historical data used to train the model cannot accurately reflect current or future reality. The phenomenon is especially pronounced for large language models, because they are usually pre-trained on large amounts of static data that may already be outdated. For example, an article about the 2020 Olympics may no longer be accurate in 2024, yet the model may still rely on that outdated information when making predictions and answering questions. Research shows that language-model performance declines over time: changes in language use, cultural shifts, and knowledge updates all affect a model's ability to handle current information.

2. Misinformation Pollution

Misinformation pollution refers to external information mixed with wrong or misleading content; such inaccurate data impairs the model's judgment and decision-making. The situation is especially common in the Internet age, where all kinds of false information, rumors, and deliberately fabricated fake news circulate. Malicious users may interfere with a model's judgment by publishing false information online; for example, an attacker could post false medical information on social media to mislead models that rely on that information. Misinformation pollution not only hurts the model's accuracy but also undermines users' trust in the model. Research shows that malicious disinformation can significantly weaken the accuracy of automated fact-checking systems and open-domain question-answering systems.

Manifestations

Models show significant complexity and diversity in their behavior when faced with Context-Memory Conflict. Two typical manifestations are:

1. Reliance on parametric knowledge

Some models tend to over-rely on their internal parametric knowledge when context and memory conflict, ignoring externally provided contextual information. This behavior was observed in early open-domain question answering (ODQA) research: Longpre et al. (2021) found that QA models tend to fall back on memorized knowledge when contextual information conflicts with their internal knowledge.

2. Reliance on contextual information

Conversely, some models tend to accept external evidence when it is presented, even if it contradicts their internal memory. Chen et al.'s (2022) experiments on QA models showed a tendency to rely on contextual knowledge, in contrast to the findings of Longpre et al.; the discrepancy was attributed to Longpre et al. constructing conflicting information in an overly simplistic way. More recently, Xie et al. (2023) prompted large models to generate "more logical" conflicting contexts and found that, when faced with such evidence, large models were more inclined to trust it, even when it contradicted their parametric knowledge.

Solutions

To deal with Context-Memory Conflict effectively, researchers have proposed a variety of solutions, which fall mainly into preventive measures taken before the conflict occurs (pre-hoc strategies) and responses taken after the conflict occurs (post-hoc strategies). The main solutions are:

1. Preventive measures

  • Continual learning (Continual Learning): reduce the impact of temporal misalignment by continually pre-training the model on new and updated data. For example, Lazaridou et al. (2021) recommend updating the model's internal knowledge through continual pre-training so that it keeps up with the latest information (a minimal sketch follows this list).

  • Knowledge editing (Knowledge Editing): directly update the parametric knowledge of a trained model to reflect the latest information. For example, De Cao et al. (2021) proposed a knowledge-editing method that directly modifies the model's internal knowledge to correct erroneous or outdated information. A drawback of knowledge editing, however, is that it may introduce internal conflicts in the model, i.e. induce the intra-memory conflict discussed later.
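
As a rough illustration of the continual pre-training idea in the first bullet, the sketch below keeps training a small causal language model on a handful of newer documents. The model (`gpt2`), the example documents, and the hyperparameters are placeholders, not the setup of Lazaridou et al. (2021).

```python
# Minimal continual pre-training sketch: keep training a causal LM on newer text
# so its parametric knowledge does not drift too far behind reality.
# "gpt2" and the two example documents are placeholders, not the original setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

new_documents = [
    "Argentina won the 2022 FIFA World Cup.",
    "The 2024 Summer Olympics were held in Paris.",
]

model.train()
for epoch in range(1):
    for doc in new_documents:
        batch = tokenizer(doc, return_tensors="pt")
        # Standard causal-LM objective: labels are the input ids themselves.
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

In practice such updates are run over large, curated streams of recent text; the loop above only shows the shape of the objective.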

2. Countermeasures

  • Fine-tuning (Fine-Tuning): enhance the model's use of context and its robustness by introducing counterfactual and irrelevant contexts during training. For example, the knowledge-aware fine-tuning (KAFT) method of Li et al. (2022) adds counterfactual and irrelevant contexts to standard training datasets to strengthen the model's robustness to conflicting information.

  • Prompting: strengthen the model's reliance on context through specially designed prompting strategies. For example, Zhou et al. (2023) proposed a concise context-faithful prompting technique that significantly improved model performance on context-sensitive tasks.

  • Knowledge plug-in (Knowledge Plug-in): store updated knowledge in plug-in modules so that the original model is unaffected. For example, the continuously-updated QA (CuQA) method of Lee et al. (2022) enhances the model's ability to incorporate new knowledge through plug-ins without touching its original parameters.

  • Decoding: adjust the decoding strategy to reduce the probability of hallucination under knowledge conflicts. For example, the context-aware decoding (CAD) method of Shi et al. (2023) prioritizes contextual information by amplifying the difference between the output distributions computed with and without the context, thereby reducing how easily the model is misled by conflicting information (see the sketch below).
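
To make the contrastive intuition behind context-aware decoding concrete, here is a minimal sketch that compares next-token logits computed with and without the retrieved context and boosts tokens the context supports. The model choice (`gpt2`), the value of `alpha`, and the exact combination rule are illustrative assumptions, not the precise recipe of Shi et al. (2023).

```python
# Simplified sketch of the contrastive idea behind context-aware decoding (CAD):
# amplify what the context adds by contrasting next-token logits computed with
# and without the retrieved context. Details (normalization, alpha) are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

context = "Document: Argentina won the 2022 World Cup.\n"
question = "Question: Who won the 2022 World Cup?\nAnswer:"
alpha = 1.0  # how strongly to favor context-supported tokens

with torch.no_grad():
    with_ctx = model(**tokenizer(context + question, return_tensors="pt")).logits[0, -1]
    no_ctx = model(**tokenizer(question, return_tensors="pt")).logits[0, -1]

# (1 + alpha) * logits_with_context - alpha * logits_without_context
adjusted = (1 + alpha) * with_ctx - alpha * no_ctx
next_token = torch.argmax(adjusted)
print(tokenizer.decode(next_token))
```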

Combining these preventive and post-hoc measures improves, from different angles, the accuracy and robustness of models handling Context-Memory Conflict, and thereby the model's performance and user experience in practical applications.

Exploring Inter-Context Conflict: causes, manifestations, and solutions


Causes

Inter-Context Conflict refers to contradictions that arise when different pieces of external information are integrated: although external information can enrich the model's answers, it can also introduce conflicts between contexts. Such conflicts arise mainly because external information may contain misinformation (Misinformation) and outdated information (Outdated Information).

1. Misinformation

Retrieval-augmented generation (RAG) improves the response quality of large models by integrating external information, but that external information may contain false content, such as fake news or misleading AI-generated text, causing conflicts among the retrieved documents. How the model handles these conflicts is an important challenge: failing to resolve them can lead the model to generate inaccurate content, exacerbating the spread of false information and further muddying the information environment.

2. Outdated Information

Facts change over time. When retrieving external documents, a large model may encounter documents containing both current and outdated information, and this temporal difference can produce conflicts between contexts. For example, contradictions between the latest developments of an event and outdated reports about it can affect the accuracy of the model's response. Outdated information not only makes answers inaccurate, it can also cause users to lose trust in the model.

Manifestations

When faced with Inter-Context Conflict, large models show characteristic behaviors, viewed from both a passive and an active perspective:

1. Performance Impact

Erroneous or outdated information can significantly affect the performance of large models. For example, Chen et al. (2022) pointed out that when models encounter conflicting information, they are more likely to trust evidence that is directly relevant to the question and consistent with their parametric knowledge. Pan et al. (2023a) found that existing language models performed poorly under misinformation attacks in which fake Wikipedia articles were inserted into the real Wikipedia corpus. Xie et al. (2023) further revealed that large models show a marked preference for evidence consistent with their parametric memory, especially when that evidence involves common entities or is supported by a large number of documents.

2. Detection Ability

Detecting contradictory information within the context is also an important task. Li et al. (2023a) analyzed the ability of GPT-4, PaLM-2, and Llama 2 to detect contradictory documents in news, stories, and Wikipedia articles, and found low average detection accuracy. Wan et al. (2024) showed that existing models rely heavily on query-relevant document content when judging document credibility, while ignoring stylistic features that humans consider important, such as scientific citations or a neutral tone. Jin et al. (2024a) found that large models favor the evidence that appears most frequently in the context and show a clear preference for external information that agrees with their internal memory.
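
The works above probe large models directly with prompts. As a lightweight illustration of what pairwise contradiction detection can look like in code, the sketch below runs an off-the-shelf NLI classifier over pairs of retrieved passages; the checkpoint name and example passages are assumptions for illustration, not the evaluation setup of the cited papers.

```python
# Lightweight stand-in for inter-context contradiction detection: an off-the-shelf
# NLI classifier flags pairwise contradictions between retrieved passages.
# "roberta-large-mnli" is a common public checkpoint, used here for illustration.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

passages = [
    "Brazil has won the FIFA World Cup five times, more than any other nation.",
    "No country has won the FIFA World Cup more than three times.",
]

for i in range(len(passages)):
    for j in range(i + 1, len(passages)):
        # Pass the two passages as a premise/hypothesis pair.
        result = nli([{"text": passages[i], "text_pair": passages[j]}])[0]
        if result["label"] == "CONTRADICTION":
            print(f"Passages {i} and {j} contradict (score={result['score']:.2f})")
```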

Solutions

To deal with Inter-Context Conflict effectively, researchers have proposed solutions from multiple angles. These fall mainly into two categories, eliminating the conflict (Eliminating Conflict) and improving robustness (Improving Robustness), which address Inter-Context Conflict from active and passive perspectives respectively.

1. Eliminating Conflict

  • Specialized models (Specialized Models): train a dedicated model to better handle specific types of conflict. For example, Pielka et al. (2022) suggested adding linguistic knowledge to the learning process, introducing grammatical and semantic features to improve the model's ability to recognize contradictory information.

  • General models (General Models): use general-purpose models to eliminate the conflict. Chern et al. (2023) proposed a fact-checking framework that integrates multiple tools (such as Google Search and Google Scholar) to detect factual errors in text. This approach relies not only on the model's internal knowledge but also on externally retrieved information, providing more comprehensive verification of facts.

2. Improving Robustness

  • Training approach (Training Approach): improve the model's robustness to conflicting contexts at the level of the training algorithm. Hong et al. (2023) proposed a new fine-tuning method that trains a discriminator and the decoder simultaneously, which improves both the model's stability in the face of conflicting information and its ability to handle complex information.

  • Query augmentation (Query Augmentation): improve robustness by introducing additional queries during inference. Weller et al. (2022) proposed a query-augmentation technique that prompts GPT-3 to derive new questions from the original query; by generating multiple queries related to the original question, the model can verify the correctness of the answer from multiple perspectives and reduce errors caused by relying on a single source of information. This improves the model's ability to cope with conflicting information as well as the accuracy and reliability of its answers.
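
The sketch below illustrates the general multi-query idea under stated assumptions: `llm` stands for any prompt-in, text-out model call, and the paraphrasing prompt and majority vote are illustrative choices, not the exact procedure of Weller et al. (2022).

```python
# Sketch of the query-augmentation idea: paraphrase the original question, answer
# each variant independently, and keep the answer most phrasings agree on.
from collections import Counter
from typing import Callable

def robust_answer(
    question: str,
    context: str,
    llm: Callable[[str], str],
    n_paraphrases: int = 3,
) -> str:
    """Answer `question` via several paraphrases and majority-vote the results.

    `llm` is any prompt-in / text-out function (an API call, a local model, ...).
    """
    # Ask the model itself for paraphrases, one per line (illustrative prompt).
    paraphrases = llm(
        f"Rewrite the question below in {n_paraphrases} different ways, one per line:\n"
        f"{question}"
    ).splitlines()[:n_paraphrases]
    variants = [question] + [p.strip() for p in paraphrases if p.strip()]

    answers = [
        llm(f"Context:\n{context}\n\nQuestion: {q}\nAnswer briefly:") for q in variants
    ]
    # An answer supported by several phrasings is less likely to be an artifact of
    # a single misleading passage or of the exact wording of the query.
    best, _ = Counter(a.strip().lower() for a in answers).most_common(1)[0]
    return best
```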

Inter-Context Conflict is an important kind of knowledge conflict, and how large models handle conflicting information is a critical question. The methods above improve, from different angles, the accuracy and robustness of models when dealing with Inter-Context Conflict.

Exploring Intra-Memory Conflict: causes, manifestations, and solutions


Causes

Intra-Memory Conflict refers to a model behaving inconsistently on inputs that are semantically identical but syntactically different. The main causes can be divided into the following aspects:

1. Bias in Training Corpora

LLMs acquire most of their knowledge during pre-training, and the pre-training data is usually scraped from the Internet. This data comes from a wide range of sources, including social media, news articles, and encyclopedias; its quality varies and it may contain inaccurate or misleading information. Such erroneous information is memorized by the model and amplified during inference, leading to conflicting knowledge inside the model and to multiple contradictory answers to related questions. At the same time, large models often encode superficial correlations in the training data, which causes them to make judgments based on spurious associations; relying on these, a model may give different answers to prompts that have different syntactic structures but identical semantics.

2. Decoding Strategy

The output of a large model is obtained by sampling from the probability distribution over possible next tokens. Different sampling methods (such as greedy decoding, top-p sampling, and top-k sampling) introduce randomness into the generated content. For example, with top-k sampling the model randomly selects the next token from the k highest-probability candidates. This randomness increases the uncertainty of the output, so the same input may yield different results across different inference runs.
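
To make that randomness concrete, here is a minimal top-k sampling sketch over a toy next-token distribution; the vocabulary size and logits are made up for illustration.

```python
# Minimal top-k sampling sketch: even for the same input, sampling can return
# different continuations on different runs, one source of intra-memory
# inconsistency described above.
import torch

def sample_top_k(logits: torch.Tensor, k: int = 5) -> int:
    """Sample the next token id from the k highest-probability candidates."""
    top_logits, top_ids = torch.topk(logits, k)
    probs = torch.softmax(top_logits, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)
    return int(top_ids[choice])

# Toy next-token distribution over a 10-word vocabulary.
logits = torch.tensor([2.0, 1.8, 1.5, 0.2, 0.1, 0.0, -1.0, -2.0, -3.0, -4.0])
print([sample_top_k(logits, k=3) for _ in range(5)])  # likely not all identical
```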

3. Knowledge Editing

To modify the knowledge in large models efficiently, researchers have proposed knowledge-editing techniques that modify a small, targeted portion of the model's knowledge without retraining the whole model. However, such editing may make it difficult to keep the knowledge consistent. For example, editing one fact (say, the specific details of a scientific discovery) without simultaneously updating all the knowledge related to it may cause the model to produce inconsistent responses to different questions. Moreover, the edited knowledge may not be applied reliably in different situations, so the model may answer inconsistently across different expressions of the same knowledge.

Manifestations

Intra-Memory Conflict significantly affects the performance of large models, mainly in the following respects:

1. Self-Inconsistency

Self-inconsistency means that the model generates inconsistent answers to questions that are semantically equivalent but syntactically different. For example, research shows that even an advanced model like GPT-4 gives inconsistent answers to 13% of common-sense questions, meaning that users who ask the same question with different wording may receive different answers. In addition, when recalling knowledge, a model may rely more on superficial word associations in the training data than on a genuine understanding of the knowledge; it might, for example, incorrectly link words that frequently co-occur, causing its answers to deviate from what is expected. Such spurious correlations further exacerbate the self-inconsistency of the model's answers.

2. Latent Representation of Knowledge

The multi-layer Transformer architecture of large models means that different representations of knowledge are stored at different layers: shallow layers may store low-level information, while deeper layers store semantic information. This dispersion across layers can prevent the model from accurately expressing stored knowledge during generation and from coordinating the knowledge held at different layers when facing different questions, which produces inconsistent answers.
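
As a small, hedged illustration of how layer-wise representations can be inspected, the sketch below collects the hidden state of the last token at every layer of a small model. Which layer encodes what remains a research question; `gpt2` and the probe prompt are only stand-ins.

```python
# Quick probe of how knowledge is spread across layers: collect the hidden state
# of the last token at every Transformer layer. This only shows where to look;
# interpreting what each layer encodes is a separate research problem.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).hidden_states  # embeddings + one tensor per layer

for layer_idx, h in enumerate(hidden_states):
    last_token_vec = h[0, -1]  # representation of the final token at this layer
    print(layer_idx, last_token_vec.norm().item())
```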

3. Cross-lingual Inconsistency

Because large models maintain different sets of knowledge for different languages, cross-lingual consistency problems arise: the same fact may receive different answers in different languages. The phenomenon is especially obvious in cross-lingual question answering. For example, a model may answer a factual question accurately in English but give a different answer in Spanish.

Solutions

For intra-memory conflicts, researchers have proposed a variety of solutions, which can be grouped into the following categories:

1. Improving Consistency

  • Fine-tuning (Fine-tuning): introduce a consistency loss and combine it with the standard language-modeling loss during fine-tuning to improve the model's knowledge consistency. For example, Li et al. (2023) had the model generate and then validate its own answers, selecting the more consistent answer pairs for further fine-tuning to improve the consistency of the generated answers (a minimal sketch of a consistency-regularized objective appears after this list).

  • Plug-in: improve model consistency by integrating additional modules. For example, Jang and Lukasiewicz (2023) proposed training the model on dictionary word definitions to strengthen its understanding of symbol meanings, then merging the enhanced parameters with those of the existing language model to improve consistency.

  • Output ensemble (Output Ensemble): obtain the most reliable answer by combining multiple outputs. Mitchell et al. (2022) proposed a dual-model architecture that evaluates the logical consistency between candidate answers to select the most credible final answer and reduce inconsistency in generation.
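
The sketch below shows one plausible form of consistency-regularized fine-tuning: the usual language-modeling loss plus a KL term that pushes the next-token distributions after two paraphrases toward each other. It is a minimal sketch of the general idea, not the exact objective of Li et al. (2023); the model, paraphrases, and weight are illustrative.

```python
# Sketch of consistency-regularized fine-tuning: add to the usual LM loss a term
# that penalizes divergent next-token distributions on two paraphrases of the
# same question. Illustrative only; not the exact objective of any cited paper.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

paraphrase_a = "Who painted the Mona Lisa? The answer is"
paraphrase_b = "The Mona Lisa was painted by"
lambda_consistency = 0.5

batch_a = tokenizer(paraphrase_a, return_tensors="pt")
batch_b = tokenizer(paraphrase_b, return_tensors="pt")

out_a = model(**batch_a, labels=batch_a["input_ids"])
out_b = model(**batch_b, labels=batch_b["input_ids"])

# The next-token distributions after the two paraphrases should agree.
dist_a = F.log_softmax(out_a.logits[0, -1], dim=-1)
dist_b = F.log_softmax(out_b.logits[0, -1], dim=-1)
consistency = F.kl_div(dist_a, dist_b, log_target=True, reduction="batchmean")

loss = out_a.loss + out_b.loss + lambda_consistency * consistency
loss.backward()
optimizer.step()
```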

2. Improving Factuality

Improve the factuality of the model's responses, thereby reducing self-inconsistency. For example, Li et al. (2023) proposed a knowledge-probing method that identifies the true knowledge encoded in the model's parameters and, during inference, adjusts activations along directions associated with that knowledge to reduce factual errors in generation.

Intra-memory conflict is an important challenge in LLM research, and addressing it requires work at multiple stages: training, generation, and post-processing. Current solutions mitigate the problem to some extent, but many challenges remain.

Discussion 1: How should the model respond to conflicts?

Ideally, a model should be able to identify conflicts and give clear answers when it encounters them. However, research has found that while existing models are fairly good at recognizing that a conflict exists, they still struggle to pinpoint the specific conflicting passages and to generate answers that distinguish between the conflicting claims. Some researchers further argue that the task of "handling conflicts" should not be left entirely to AI systems such as large models, but should instead remain in human hands.

Discussion 2: Current challenges and follow-up research directions


1. Knowledge conflicts in real environments:

Research should focus on knowledge conflicts that occur naturally in the real world, for example in documents retrieved directly from the web by retrieval-augmented language models (RALMs). Artificially constructed knowledge conflicts should be minimized so that studies better reflect practical applications.

2. More granular solutions:

More fine-grained solutions are needed that take into account the nature of the user's query, the sources of conflicting information, and user expectations. Solutions should be tailored to different types of conflict (such as misinformation, outdated information, or subjective questions), recognizing the breadth of both the problem and the potential solutions.

3. Downstream task evaluation:

Future research should go beyond common question-answering datasets to assess the impact of knowledge conflicts on a wider range of applications, including domains that demand high accuracy and consistency, such as legal document analysis, medical diagnosis, financial analysis, and educational tools.

4. Interactions between conflicts:

It is crucial to study the interactions between different types of conflict, such as intra-memory conflict and context-memory conflict. Understanding these relationships may reveal how knowledge is represented and processed in large models, leading to the development of more robust models.

5. Interpretability:

A more fine-grained examination of the internal mechanisms of large models is needed (such as attention heads or neuron activations during conflict). This would help explain how models make decisions when encountering conflicts and support the development of conflict-resolution methods such as path patching and pruning.

6. Multilingualism:

Research should explore non-English prompts and knowledge conflicts across languages. This includes knowledge conflicts in languages other than English, as well as contextual conflicts across multiple documents in different languages.

7. Multimodality:

As large models develop to handle multiple modalities (text, image, video, audio), future research should focus on conflicts in multimodal settings. Developing advanced LLMs capable of cross-modal reasoning and of resolving conflicts across multiple data types will be necessary.
