
Tsinghua, Cambridge, and UIC jointly release the first Chinese fact-checking dataset: evidence-based, covering health, society, and other domains



  • Paper: https://arxiv.org/pdf/2206.11863.pdf
  • CHEF dataset: https://github.com/THU-BPM/CHEF

1. Introduction

Let's first look at the definition of the task through a relatively simple example.


For example, during the Shanghai lockdown, a certain self-media account claimed that "Li Liqun was caught sneaking downstairs to buy meat." Based on this claim alone, we cannot determine whether he really sneaked downstairs to buy meat and was caught. To verify the authenticity of the claim, the most intuitive idea is to look for evidence: information that can be collected and that helps us judge whether the claim is true. For example, Li Liqun himself later stepped in personally to refute the rumor, and his response can be used as evidence.


The claim above is relatively simple: it can be checked against a single piece of evidence without any reasoning over that evidence. Now consider a more complex example. Take the claim: "In 2019, a total of 120,800 people took the Chengdu high school entrance examination, but the enrollment plan was only 43,000." This claim is harder to verify. Suppose we find a relevant document reporting on the 2019 Chengdu high school entrance examination:

...A total of 120,800 people took the high school entrance examination this year. This is the total number of examinees in Chengdu, covering 20 districts and counties plus the High-tech Zone and Tianfu New District. A few months ago, the Education Bureau announced the 2019 general high school enrollment plan; the planned enrollment has increased further, so the chances of getting into a general high school are even greater. ...

In 2019, the enrollment plan for the central urban area (13 districts) is 43,015 students.

This document contains a lot of information related to the claim, but the parts that are directly relevant and can help us verify it are the second half of the second paragraph and a sentence that appears only many paragraphs later (both quoted above). Based on these pieces of evidence, we know that 120,800 people indeed took the high school entrance examination across Chengdu's 20 districts, and that the enrollment plan for the central urban area (only 13 districts) was indeed about 43,000. The numbers are correct, but the claim quietly switches concepts: the examinee count covers 20 districts, while the enrollment plan covers only 13, which misleads readers. To verify this kind of claim, we often need to extract directly relevant evidence from one or more documents and then reason over the extracted evidence. To promote machine learning systems for Chinese fact-checking, we propose such an evidence-based Chinese dataset.

2. Related work

According to a survey of fact-checking [1], existing fact-checking datasets can be roughly divided into two categories: artificial and natural.


Artificial: annotators are asked to rewrite sentences from Wikipedia into claims, and relevant paragraphs in the corresponding documents serve as evidence to verify those claims. If the rewrite is a paraphrase, the claim is supported by the evidence (Supported); if entities in the sentence are replaced, or modifications such as negation are added, the claim is refuted by the evidence (Refuted).

This annotation paradigm originated with FEVER [2], and many later well-known datasets such as TabFact [3] follow it. The advantage of artificial datasets is scale: annotators can label on the order of 100,000 claims, which is well suited to training neural networks, and the relevant evidence is easy to obtain. The disadvantage is that such claims are not the ones we encounter in daily life or that circulate among the general public; no one would rewrite Li Liqun's Wikipedia page into "He was caught sneaking downstairs to buy meat." Moreover, this type of dataset assumes that Wikipedia contains all the knowledge needed to verify the claims, which is a fairly strong assumption that is often not met in real-world scenarios; the simplest problem is that Wikipedia lags behind current events.


Natural: claims crawled directly from fact-checking platforms. A well-known organization abroad is PolitiFact, which frequently checks statements made by Trump. The advantage of this type of dataset is that these are claims the general public actually encounters and wants to see verified, and they are exactly the claims human fact-checkers have to sift through.

If we ultimately want to build a system that can, to some extent, replace human fact-checkers, its input needs to be this type of claim. The disadvantage of natural datasets is also obvious: the number of claims that have been verified by humans is very limited. As the table shows, most natural datasets are an order of magnitude smaller than artificially constructed ones.

On the other hand, finding evidence is a very difficult problem. Existing datasets generally use the fact-checking article itself as evidence [4], or issue the claim as a Google search query [5][6] and use the returned search snippets as evidence.


There are two problems with these methods of finding evidence:

  • Using the fact-checking article itself as evidence: this is unrealistic in real scenarios. When a deployed fact-checking system needs to verify a new claim, a fact-checking article about it usually does not exist yet, so the system cannot learn how to collect evidence this way.
  • Using Google snippets as evidence: this overcomes the problem above and is closer to the real scenario, since fact-checkers often rely on search engines to find relevant information. However, the amount of information is severely insufficient: Google's rule-based snippets often cannot provide enough information to judge the veracity of a claim.

To address the problems above, we built CHEF, which has the following characteristics:

  • It uses real-world claims, in Chinese, filling the gap in Chinese fact-checking datasets.
  • It uses the documents returned by a search engine as raw evidence, which is closer to the real scenario.
  • It provides human-annotated relevant sentences from the returned documents as fine-grained evidence, which can be used to train verification systems to learn how to collect evidence.

3. Dataset construction

The construction of the dataset consists of four parts: data collection, claim annotation, evidence retrieval, and data validation.

3.1 Data Collection


The raw claims were mainly crawled from four Chinese fact-checking websites (listed in the Duke Reporters' Lab database): two in Simplified Chinese, the China Rumor Refuting Center and Tencent Jiaozhen, and two Traditional Chinese platforms from Taiwan, MyGoPen and the Taiwan FactCheck Center. The vast majority (90%) of the claims crawled from fact-checking websites are false, which is intuitive: most widely circulated rumors are false and get debunked by verification platforms. Following previous work (PublicHealth [7]), we crawled headlines from China News Network as true claims and built a dataset with relatively balanced labels.

3.2 Claim annotation


Compared with relatively mature foreign fact-checking organizations, the articles published by Chinese verification platforms are less standardized. PolitiFact, for example, clearly states what the claim is, gives a verification summary, and lays out the evidence and reasoning details. Chinese fact-checking articles generally do not mark these explicitly, so we asked annotators to read each article and extract the claim that it verifies. The extracted claims are also cleaned to reduce the biases they contain.

Previous work [8] has shown that claims in fact-checking datasets carry fairly strong biases (for example, false claims tend to contain negation words), and pretrained language models such as BERT can verify claims without any evidence simply by exploiting these biases. The cleaning includes rewriting rhetorical questions as declarative sentences and removing words that may introduce bias, such as "bombshell" or "shocking". After extracting the claims, we also asked annotators to label each claim based on its fact-checking article. Following FEVER and related work, we use three classes: Supported, Refuted, and Not enough information (NEI); Refuted is the largest class and NEI the smallest.
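The cleaning step can be illustrated with a minimal Python sketch; the bias-word list and the single rhetorical-question rule below are illustrative assumptions rather than the actual annotation guideline used for CHEF.

```python
import re

# Illustrative list of sensational words to strip; the real list is larger.
BIASED_WORDS = ["震惊", "重磅", "紧急", "速看"]

def clean_claim(claim: str) -> str:
    """Roughly normalise a claim: drop sensational words and rewrite a
    rhetorical question of the form "难道...吗?" into a declarative sentence."""
    for word in BIASED_WORDS:
        claim = claim.replace(word, "")
    claim = claim.strip(" !！?？,，、")            # drop stray leading/trailing punctuation
    match = re.match(r"^难道(.+?)吗$", claim)      # "难道X吗" -> "X。"
    if match:
        claim = match.group(1) + "。"
    return claim

print(clean_claim("震惊!难道李立群偷偷下楼买肉被抓了吗?"))
# -> 李立群偷偷下楼买肉被抓了。
```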

3.3 Evidence retrieval

We used each claim as a query to Google search and then filtered the returned results, removing documents published after the claim as well as documents from platforms known for spreading misinformation, keeping the top 5 documents for each claim. Annotators were then asked to select up to 5 sentences from these documents as evidence for each claim.
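A minimal sketch of this document-filtering step, assuming each search result carries its domain, publication date, and raw text; the container class, field names, and blocklist below are illustrative assumptions rather than CHEF's actual implementation.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

# Domains treated as misinformation-spreading platforms (illustrative only).
BLOCKED_DOMAINS = {"example-fake-news.com"}

@dataclass
class SearchResult:
    """Hypothetical container for one search-engine result."""
    domain: str
    published: date
    text: str

def filter_documents(results: List[SearchResult], claim_date: date, top_k: int = 5) -> List[SearchResult]:
    """Keep at most top_k documents that were published no later than the claim
    and do not come from a blocked domain, preserving the search-engine ranking."""
    kept = [r for r in results
            if r.published <= claim_date and r.domain not in BLOCKED_DOMAINS]
    return kept[:top_k]
```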

Statistics on the claims and evidence are as follows: the documents returned for each claim contain 3,691 words on average, the fine-grained evidence sentences selected by the annotators contain 126 words on average, and Google's rule-based snippets contain 68 words on average. Even by this simple comparison, using the returned documents together with annotated sentences provides more contextual information than using the snippets directly.
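As a rough illustration of how such length statistics could be recomputed from the released data (the file name and JSON field names below are assumptions, not the dataset's actual schema):

```python
import json
import jieba  # common Chinese word segmenter; any tokenizer would do

def avg_word_count(texts):
    """Average number of words over a list of Chinese strings."""
    return sum(len(jieba.lcut(t)) for t in texts) / max(len(texts), 1)

# "CHEF_train.json", "documents" and "evidence" are hypothetical names.
with open("CHEF_train.json", encoding="utf-8") as f:
    examples = json.load(f)

docs = [d for ex in examples for d in ex["documents"]]
evidence = [" ".join(ex["evidence"]) for ex in examples]
print("avg. document length:", avg_word_count(docs))
print("avg. evidence length:", avg_word_count(evidence))
```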


3.4 Data Verification

To ensure labeling consistency, we added a round of data validation: 3% of the labeled claims (310 in total) were randomly sampled and given to 5 annotators for re-labeling. The Fleiss kappa reached 0.74, slightly higher than FEVER's 0.68 and Snopes' [5] 0.70, indicating that the annotation quality is not inferior to previously constructed datasets. The claims in CHEF fall into 5 topics: society, public health, politics, science, and culture. Unlike European and American fact-checking platforms, which focus on politics, Chinese platforms pay more attention to public health, such as COVID-19, health care, and medical treatment. The other major topic is society, such as fraud, school admissions, and social events.
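For reference, an inter-annotator agreement score of this kind can be computed with statsmodels; the toy annotation matrix below is made up purely for illustration.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Toy annotation matrix: rows = claims, columns = the 5 annotators,
# values = label ids (0 = Supported, 1 = Refuted, 2 = NEI). Illustrative only.
labels = np.array([
    [1, 1, 1, 1, 2],
    [0, 0, 0, 0, 0],
    [2, 2, 1, 2, 2],
    [1, 1, 1, 1, 1],
])

table, _ = aggregate_raters(labels)              # per-claim counts of each label
print("Fleiss kappa:", fleiss_kappa(table, method="fleiss"))
```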


There are four main challenges in verifying the claims:

  • Evidence collection: nearly 70% of the claims require relevant evidence to be verified.
  • Expert consultation: nearly 40% of the claims require consulting experts to obtain relevant information.
  • Numerical reasoning: 18% of the claims require numerical reasoning to reach a verdict.
  • Multimodality: about 8% of the claims require non-textual evidence such as images or videos.



4. Baseline system


Similar to previous classic fact-checking datasets such as FEVER, a machine learning system first needs to select relevant sentences from the given documents as evidence (evidence retrieval), and then verify the claim against that evidence (claim verification).

Building on previous work, this paper proposes two broad categories of baseline systems: pipeline systems and joint systems. Pipeline: evidence retrieval and claim verification are two separate modules; evidence is extracted first, then concatenated with the claim and passed to the claim verification module for classification.

  • Evidence retrieval: we use 4 different extractors to select sentences from the returned documents as fine-grained evidence. The first is based on surface-feature matching (TF-IDF); the second is based on semantic matching (Chinese BERT representations ranked by cosine similarity); the third mixes both kinds of features and ranks them with rankSVM; the last baseline simply uses the snippets returned by Google.
  • Claim verification: we use 3 different models. The first is based on Chinese BERT: the claim is concatenated with the retrieved evidence and fed to BERT for three-way classification. The second is an attention-based model that weights the evidence differently conditioned on the claim. The third is a graph-based model: we use KGAT [9], the SOTA graph model on FEVER, which is better at combining different pieces of evidence for reasoning. A minimal sketch of one such pipeline is given after this list.
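The following is a minimal sketch of one such pipeline, combining the TF-IDF extractor with the Chinese-BERT classifier. It is an illustration under simplifying assumptions (character-level TF-IDF, an untuned classification head, an assumed label order), not the authors' exact baseline code.

```python
import torch
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import BertForSequenceClassification, BertTokenizer

LABELS = ["Supported", "Refuted", "NEI"]  # assumed label order

def tfidf_top_sentences(claim, sentences, k=5):
    """Rank candidate sentences by character-level TF-IDF similarity to the
    claim and keep the top k as fine-grained evidence."""
    vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(1, 2))
    matrix = vectorizer.fit_transform([claim] + sentences)
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
    return [sentences[i] for i in scores.argsort()[::-1][:k]]

# Chinese BERT with a randomly initialised 3-way head; in practice the model
# is fine-tuned on CHEF's training split before being used for prediction.
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained("bert-base-chinese", num_labels=3)
model.eval()

def verify(claim, sentences):
    """Retrieve evidence, concatenate it with the claim, and classify."""
    evidence = " ".join(tfidf_top_sentences(claim, sentences))
    inputs = tokenizer(claim, evidence, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]
```

The extractor can be swapped for the semantic or rankSVM variants without changing this interface, which is the main appeal of the pipeline design.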

Joint: the evidence retrieval and claim verification modules are optimized jointly. We use three different models. The first is the SOTA joint model on FEVER [10], which uses a multi-task learning framework to label evidence and verify the claim at the same time. The second treats evidence extraction as a latent variable [11]: each sentence of the returned document is labeled 0 or 1, the sentences labeled 1 are kept as evidence and classified together with the claim, and the model is trained with REINFORCE. The third is similar to the second, but uses HardKuma and the reparameterization trick for joint training [12] instead of policy gradients.
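The latent-variable baseline can be sketched in PyTorch as follows. The sketch operates on pre-computed claim and sentence embeddings and omits the baselines and variance-reduction tricks a practical REINFORCE implementation would use, so it only illustrates the idea behind [11] rather than reproducing it.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentEvidenceSelector(nn.Module):
    """Per-sentence Bernoulli 'keep / discard' decisions treated as a latent
    variable and trained with REINFORCE, using the claim-verification
    log-likelihood as the reward."""

    def __init__(self, dim=768, num_classes=3):
        super().__init__()
        self.selector = nn.Linear(2 * dim, 1)            # scores [claim; sentence]
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, claim_emb, sent_embs, label):
        # claim_emb: (dim,), sent_embs: (num_sents, dim), label: scalar long tensor
        pair = torch.cat([claim_emb.expand_as(sent_embs), sent_embs], dim=-1)
        probs = torch.sigmoid(self.selector(pair)).squeeze(-1)   # selection probabilities
        mask = torch.bernoulli(probs).detach()                   # sampled 0/1 evidence labels
        log_prob = (mask * torch.log(probs + 1e-8)
                    + (1 - mask) * torch.log(1 - probs + 1e-8)).sum()

        evidence = (mask.unsqueeze(-1) * sent_embs).sum(0) / (mask.sum() + 1e-8)
        logits = self.classifier(torch.cat([claim_emb, evidence], dim=-1))
        cls_loss = F.cross_entropy(logits.unsqueeze(0), label.unsqueeze(0))

        reward = -cls_loss.detach()          # better verification -> higher reward
        selector_loss = -reward * log_prob   # REINFORCE gradient estimator
        return cls_loss + selector_loss

# Toy usage with random embeddings (12 candidate sentences, hidden size 768).
model = LatentEvidenceSelector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss = model(torch.randn(768), torch.randn(12, 768), torch.tensor(1))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```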

5. Experimental results

5.1 Main results

The main experimental results are summarized below:

  • From the evidence retrieval perspective: the joint models generally perform better than the pipeline models, mainly because the evidence retrieval module can be optimized to find evidence that is more helpful for verifying the claim. In addition, using the returned documents is consistently better than using Google snippets, because the documents contain richer information. Finally, directly using human-annotated evidence far exceeds both categories of baseline models. As with other fact-checking datasets (e.g., FEVEROUS), evidence retrieval remains the bottleneck in claim verification; how to optimize the evidence retrieval module against human-annotated evidence is a direction worth studying in the future.
  • From the claim verification perspective: the graph-based model (KGAT) performs better than the simple BERT-based and attention-based models, suggesting that building graphs to capture chains of reasoning over evidence is effective. On the other hand, the improvement from the graph model is not particularly large, and adaptations tailored to this dataset may be needed.


5.2 Number of fine-grained evidence sentences

More fine-grained evidence is not always better. In our experiments, the evidence extractors in the pipeline systems achieve the best results when 5 sentences are selected as fine-grained evidence; with 10 or 15 sentences the results get progressively worse. We conjecture that the additional sentences introduce a lot of noise, which interferes with the claim verification model.


5.3 Impact of claim length


Most claims are longer than 10 words, and the longer the claim, the better the models perform. We conjecture that longer claims are more detailed, making it easier to collect detailed evidence that helps the model make a judgment. When claims are relatively short, the gap between the baseline models is small; when claims are relatively long, better retrieved evidence leads to better verification, which again illustrates the importance of evidence retrieval.

5.4 Impact of claim domain



Claims from the science domain are the hardest to verify; model performance basically does not exceed 55. On the one hand, it is harder to collect relevant evidence; on the other hand, scientific claims are relatively complex and often require implicit reasoning to reach a verdict.

5.5 Impact of claim classes


Even though we introduce some Supported claims, the dataset still suffers from class imbalance, and the models perform much worse on the NEI class than on Supported and Refuted. Future work could study how to adapt claim verification models to class-imbalanced fact-checking datasets, or use data augmentation to increase the number of NEI instances during training; for example, FEVEROUS [13] randomly discards the evidence of some claims during training and relabels those claims as NEI.
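A minimal sketch of this FEVEROUS-style augmentation, assuming each training example is a dict with "claim", "evidence", and "label" keys (these field names are assumptions, not the dataset's actual schema):

```python
import copy
import random

def relabel_some_as_nei(examples, ratio=0.1, seed=0):
    """For a random fraction of Supported/Refuted examples, drop the evidence
    and relabel the example as NEI (applied per training epoch)."""
    rng = random.Random(seed)
    out = []
    for ex in examples:
        ex = copy.deepcopy(ex)
        if ex["label"] != "NEI" and rng.random() < ratio:
            ex["evidence"] = []
            ex["label"] = "NEI"
        out.append(ex)
    return out
```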
