
Microsoft turned GPT-4 into a medical expert with just prompt engineering! Beating more than a dozen heavily fine-tuned models, its professional-exam accuracy exceeded 90% for the first time

王林
Release: 2023-12-04 14:25:45

Microsoft's latest research once again demonstrates the power of prompt engineering:

Without additional fine-tuning or expert curation, GPT-4 can become an "expert" through prompting alone.

Using Medprompt, the new prompting strategy they propose, GPT-4 achieved the best results on all nine test sets of the MultiMedQA medical benchmark.

On the MedQA dataset (United States Medical Licensing Examination questions), Medprompt pushed GPT-4's accuracy above 90% for the first time, surpassing a number of fine-tuned models such as BioGPT and Med-PaLM.


The researchers also state that Medprompt is general: it applies not only to medicine, but can also be extended to fields such as electrical engineering, machine learning, and law.

As soon as this study was shared on X (formerly Twitter), it attracted attention from many users.


Wharton School professor Ethan Mollick and Carlos E. Perez, author of Artificial Intuition, among others, have reposted and shared it.

Carlos E. Perez commented that "an excellent prompting strategy can take the place of a lot of fine-tuning":


One user said they had sensed this coming for a long time, and it's really cool to see the results now!


Another user called this truly "radical":

GPT-4 is a technology that can change the industry, but we are still far from hitting the limits of prompting, nor have the limits of fine-tuning been reached.


Combining prompting strategies to "transform" into an expert

Medprompt is a combination of multiple prompting strategies, comprising three "magic weapons":

  • Dynamic few-shot selection
  • Self-generated chain of thought
  • Choice shuffling ensemble

Next, we will introduce them one by one.


Dynamic few-shot selection

Few-shot learning is an effective way for a model to learn from context quickly. Simply put, you input a few examples so the model can rapidly adapt to a specific domain and learn to follow the task's format.
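As a minimal illustration of this idea (the examples below are made up, not from the paper), a few-shot prompt simply prepends worked examples before the actual question so the model picks up the format from context:

```python
# Build a tiny few-shot prompt: worked examples (hypothetical) go first,
# then the real question, ending where the model should continue.
examples = [
    ("2 + 2 = ?", "4"),
    ("7 - 3 = ?", "4"),
]
question = "5 + 1 = ?"

prompt = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
prompt += f"\n\nQ: {question}\nA:"
print(prompt)
```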

The few-shot examples used in a task prompt are usually fixed, so their representativeness and breadth matter a great deal.

The previous approach was to have domain experts manually craft the examples, but even so, there is no guarantee that a fixed set of expert-curated few-shot examples is representative for every task input.

Microsoft's researchers instead propose dynamic few-shot examples.

The idea is that the task's training set itself can serve as the source of few-shot examples: if the training set is large enough, different few-shot examples can be selected for different task inputs.

In concrete terms, the researchers first used the text-embedding-ada-002 model to generate a vector representation for each training and test sample. Then, for each test sample, the k most similar training samples are selected by comparing vector similarity.
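The selection step can be sketched as a nearest-neighbor search over embeddings. This is a minimal sketch, not the paper's implementation: the toy 2-D vectors below stand in for real 1536-dimensional ada-002 embeddings, and cosine similarity is an assumed (common) choice of metric.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def select_few_shot(test_emb, train_embs, train_examples, k=5):
    """Return the k training examples whose embeddings are most similar
    to the test question's embedding (dynamic few-shot selection)."""
    ranked = sorted(range(len(train_embs)),
                    key=lambda i: cosine(test_emb, train_embs[i]),
                    reverse=True)
    return [train_examples[i] for i in ranked[:k]]

# Toy demo: 2-D vectors stand in for real embeddings.
train_embs = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0), (0.1, 0.9)]
train_examples = ["ex_a", "ex_b", "ex_c", "ex_d"]
shots = select_few_shot((1.0, 0.05), train_embs, train_examples, k=2)
print(shots)  # the two examples pointing in roughly the query's direction
```

In practice the training-set embeddings would be computed once and cached, and an approximate nearest-neighbor index could replace the full sort for large training sets.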

Compared with fine-tuning, dynamic few-shot selection makes use of the training data without requiring extensive updates to the model's parameters.

Self-generated chain of thought

The chain-of-thought (CoT) method has the model think step by step, generating a series of intermediate reasoning steps.

Previous methods relied on experts manually writing examples with chains of thought into the prompt.


Here, the researchers found that GPT-4 can simply be asked, via a prompt, to generate chains of thought for the training examples itself.


However, the researchers also point out that these automatically generated chains of thought may contain incorrect reasoning steps, so they set up a verification label as a filter, which effectively reduces errors.
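The generate-then-filter loop might look like the sketch below. This is an assumption-laden illustration, not the paper's code: `ask_model` stands in for a GPT-4 API call, the prompt wording is invented, and the filter shown is the simplest form of label verification — keep a chain only if its final answer matches the known ground-truth label.

```python
def build_cot_examples(ask_model, training_items):
    """Ask the model for step-by-step reasoning plus a final answer for
    each training item; keep a chain only when its final answer matches
    the known label (the verification filter)."""
    kept = []
    for item in training_items:
        prompt = (
            f"Question: {item['question']}\n"
            f"Options: {item['options']}\n"
            "Explain your reasoning step by step, then give the final "
            "answer as a single letter on the last line."
        )
        reply = ask_model(prompt)
        final_line = reply.strip().splitlines()[-1].strip()
        if final_line == item["label"]:   # drop chains reaching the wrong answer
            kept.append({"question": item["question"], "cot": reply})
    return kept

# Demo with a stub model that always reasons its way to "A".
def stub_model(prompt):
    return "Step 1: consider the options.\nStep 2: conclude.\nA"

items = [
    {"question": "q1", "options": "A/B", "label": "A"},
    {"question": "q2", "options": "A/B", "label": "B"},
]
kept = build_cot_examples(stub_model, items)
print(len(kept))  # only the item whose label the stub's answer matches
```

Note that a correct final answer does not guarantee every intermediate step is sound; the filter only removes chains whose reasoning demonstrably went wrong.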

Compared with the expert-crafted chain-of-thought examples used in the Med-PaLM 2 model, the rationales GPT-4 generates are longer, and their step-by-step reasoning logic is finer-grained.

Choice shuffling ensemble

When answering multiple-choice questions, GPT-4 may exhibit a bias: it tends to always pick A, or always pick B, regardless of the options' content. This is position bias.

To mitigate this, the researchers rearrange the order of the options. For example, an original option order of ABCD can be changed to BCDA, CDAB, and so on.

GPT-4 then runs multiple rounds of prediction, with a different option order in each round. This "forces" GPT-4 to consider the content of the options.

Finally, a vote is taken over the rounds' predictions, and the most consistent answer is chosen.
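The shuffle-and-vote loop above can be sketched as follows. This is a minimal illustration under stated assumptions: `ask_model` is a stand-in for a GPT-4 call that returns a single option letter, and the vote is a plain majority over option *content* rather than position.

```python
import random
from collections import Counter

def shuffled_ensemble(ask_model, question, options, rounds=5, seed=0):
    """Query the model several times with the answer options shuffled,
    map each predicted letter back to the option's content, and return
    the majority-vote answer."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(rounds):
        order = list(options)
        rng.shuffle(order)
        letters = "ABCDEFGH"[: len(order)]
        prompt = question + "\n" + "\n".join(
            f"{letter}. {text}" for letter, text in zip(letters, order)
        )
        letter = ask_model(prompt)                 # e.g. "B"
        votes[order[letters.index(letter)]] += 1   # vote by content, not position
    return votes.most_common(1)[0][0]

# Demo: a toy model that actually reads the options and picks "Paris",
# whichever letter it lands on after shuffling.
def content_model(prompt):
    for line in prompt.splitlines():
        if line.endswith("Paris"):
            return line[0]

answer = shuffled_ensemble(
    content_model, "Capital of France?",
    ["London", "Paris", "Rome", "Berlin"],
)
print(answer)
```

A position-biased model that always answered "A" would scatter its votes across different contents as the order changes, so the vote also serves as a rough consistency check.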

The combination of the above prompt strategies is Medprompt. Let’s take a look at the test results.

Best results across multiple tests

In the tests, the researchers used the MultiMedQA evaluation benchmark.


GPT-4 with the Medprompt prompting strategy achieved the highest scores on all nine benchmark datasets in MultiMedQA, outperforming Flan-PaLM 540B and Med-PaLM 2.

In addition, the researchers examined the Medprompt strategy's performance on "eyes-off" data, meaning data the model has never seen during training or optimization; it is used to test whether the model overfits the training data.


The result: GPT-4 combined with the Medprompt strategy performed well on multiple medical benchmark datasets, with an average accuracy of 91.3%.

The researchers conducted ablation experiments on the MedQA dataset to explore the relative contribution of each of the three components to overall performance.


Among them, the automatically generated chain-of-thought step contributes the most to the performance improvement.


The chains of thought automatically generated by GPT-4 scored higher than those curated by experts for Med-PaLM 2, and require no manual intervention.


Finally, the researchers also explored Medprompt's cross-domain generalization, using six different datasets from the MMLU benchmark covering electrical engineering, machine learning, philosophy, professional accounting, professional law, and professional psychology.

Two additional datasets containing NCLEX (the US nursing licensure examination) questions were also added.

The results show that Medprompt's effect on these datasets is similar to its improvement on the MultiMedQA medical datasets, raising average accuracy by 7.3%.


Paper: https://arxiv.org/pdf/2311.16452.pdf

Source: 51cto.com