
3 times the generation speed and reduced memory costs, an efficient decoding framework that surpasses Medusa2 is finally here

王林
Release: 2024-06-12 11:55:28

Efficiently decoding n-token sequences: the CLLMs + Jacobi decoding framework.

Traditionally, large language models (LLMs) are thought of as sequential decoders, decoding each token one by one.

A research team from Shanghai Jiao Tong University and the University of California shows that pre-trained LLMs can easily be taught to become efficient parallel decoders, and introduces a new family of parallel decoders called Consistency Large Language Models (CLLMs), which reduce inference latency by efficiently decoding an n-token sequence at each inference step.

In this paper, the researchers show that "mimicking the cognitive process by which humans form complete sentences in their heads before expressing them word by word can be effectively learned by simply fine-tuning pre-trained LLMs."

Specifically, CLLMs are trained to map any randomly initialized n-token sequence, in as few steps as possible, to the same result that autoregressive (AR) decoding would produce. In this way, parallel decoding can be trained.

Experimental results show that the CLLMs obtained with the proposed method are highly effective: the method delivers a 2.4x to 3.4x improvement in generation speed, comparable to other fast inference techniques such as Medusa2 and Eagle, while requiring no additional memory cost for auxiliary model components during inference.


  • Paper name: "CLLMs: Consistency Large Language Models"

  • Paper link: https://arxiv.org/pdf/2403.00835


Figure 1: Demonstration of CLLM-ABEL-7B-001 achieving roughly a 3x speedup over the baseline ABEL-7B-001 when using Jacobi decoding on GSM8K.

Jacobi Decoding

Large language models (LLMs) are changing the face of human life, from programming to providing legal and health advice.

However, during inference, LLMs use autoregressive decoding to generate responses token by token, as shown in Figure 2, which results in high latency for long responses. Speeding up inference by generating multiple tokens at once typically requires architectural modifications, auxiliary components, or draft models on top of the target model.


Figure 2: Schematic of traditional autoregressive (AR) decoding: one token is generated at a time.

Jacobi decoding derives from the Jacobi and Gauss-Seidel fixed-point iteration methods for solving systems of nonlinear equations, and has been proven to produce exactly the same output as autoregressive generation under greedy decoding.

Jacobi decoding reformulates the sequential generation process as a system of n nonlinear equations in n variables, which can be solved in parallel based on Jacobi iteration.

Each iteration step may predict multiple correct tokens (the so-called "correct" refers to aligning with the autoregressive decoding results under the greedy sampling strategy), thereby potentially accelerating autoregressive decoding.

Figure 3: Illustration of the Jacobi decoding iteration process and the Jacobi trajectory.

Specifically, Jacobi decoding first makes a random guess for the next n tokens of the sequence following the input prompt (hereinafter referred to as the n-token sequence, unless otherwise stated).

The n-token sequence is then fed into the LLM, together with the prompt, for iterative updates. This process continues until the n-token sequence stabilizes and no longer changes, that is, it reaches a fixed point.

It is worth noting that Jacobi decoding requires no more queries to the LLM than autoregressive (AR) decoding. Eventually, the n-token sequence converges to the output that AR decoding would generate under the greedy strategy. The process from the initial random guess to the final AR result is called the "Jacobi trajectory."

An example of the Jacobi decoding iteration process and the Jacobi trajectory is illustrated in Figure 3.
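To make the iteration loop concrete, here is a minimal sketch of Jacobi decoding as described above. It assumes a hypothetical helper `greedy_next_tokens(prompt_ids, block)` that performs one parallel forward pass of the LLM and returns, for every position i of the block, the greedy next token given the prompt and the current guesses for the preceding positions; the helper name and tensor shapes are illustrative, not the paper's actual implementation.

```python
import torch

@torch.no_grad()
def jacobi_decode(greedy_next_tokens, prompt_ids, n, vocab_size):
    """Sketch of Jacobi decoding for one n-token block.

    greedy_next_tokens(prompt_ids, block) is a hypothetical helper that runs a single
    parallel forward pass of the LLM and returns an (n,) tensor whose i-th entry is
    argmax_y p(y | prompt, block[:i]) -- the greedy next token at every position.
    """
    y = torch.randint(0, vocab_size, (n,))   # random initial guess for the n-token sequence
    trajectory = [y.clone()]                 # the Jacobi trajectory J

    for _ in range(n + 1):                   # at most n updates are needed; +1 to detect the fixed point
        y_next = greedy_next_tokens(prompt_ids, y)   # update all n positions in parallel
        trajectory.append(y_next.clone())
        if torch.equal(y_next, y):           # fixed point reached: y^(k) == y^(k-1)
            break
        y = y_next

    return y, trajectory                     # y is the fixed point y*, matching greedy AR output
```

Because position i is updated using only the current guesses for the positions before it, at least the first not-yet-correct token becomes correct in every iteration, so the loop converges in at most n steps while potentially fixing several tokens at once.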

Limitations of Jacobi decoding:

However, in practice, vanilla Jacobi decoding only slightly accelerates LLMs, with an average speedup of roughly 1.05x. This is because it is difficult for an LLM to generate correct tokens while errors remain in the preceding tokens.

Therefore, most Jacobi iterations obtain only a single corrected token in the n-token sequence, resulting in the long trajectory shown on the left side of Figure 3.

Look-ahead decoding and speculative decoding methods attempt to alleviate the inefficiencies of Jacobi decoding and traditional autoregressive decoding, but incur additional memory costs during inference.

CLLMs do not require these additional memory costs.

Consistency Large Language Models (CLLMs)

Preliminaries: Jacobi decoding

Given a prompt x and a pre-trained LLM p(·|x), researchers usually obtain the model's response under the greedy strategy with the standard autoregressive (AR) decoding method, that is:

$$y_i = \arg\max_y \; p(y \mid y_{<i}, x), \qquad i = 1, \dots, n$$

Jacobi decoding reframes the LLM inference process as solving a system of nonlinear equations, transforming the decoding process into a form that can be computed in parallel. Consider:

$$f(y_i, y_{<i}, x) := y_i - \arg\max_y \; p(y \mid y_{<i}, x), \qquad i = 1, \dots, n$$

The above equation can then be rewritten as a system of nonlinear equations:

$$f(y_i, y_{<i}, x) = 0, \qquad i = 1, \dots, n,$$

which, starting from a randomly initialized n-token sequence $\mathbf{y}^{(0)}$, can be solved in parallel with the Jacobi fixed-point iteration

$$y_i^{(j+1)} = \arg\max_y \; p\bigl(y \mid y_{<i}^{(j)}, x\bigr), \qquad i = 1, \dots, n.$$

Note that the process exits at some value of k such that:

$$\mathbf{y}^{(k)} = \mathbf{y}^{(k-1)}.$$

Then, define $\mathbf{y}^* := \mathbf{y}^{(k)}$ as the fixed point and $\mathcal{J} := \{\mathbf{y}^{(0)}, \dots, \mathbf{y}^{(k)}\}$ as the Jacobi trajectory.

To solve this problem, the research team proposed to tune the pre-trained LLMs so that they consistently map any point $\mathbf{y}$ on the Jacobi trajectory $\mathcal{J}$ to the fixed point $\mathbf{y}^*$.

Surprisingly, they found that such an objective is similar to that of consistency models, a major acceleration method for diffusion models.

In the method proposed by the team, the model is trained using Jacobi trajectories collected from the target model and uses a loss function that encourages single-step convergence during Jacobi iterations.

For each target model p to be adapted into a CLLM, training consists of two parts:

(1) Jacobi trajectory preparation:

For each prompt, the authors sequentially perform Jacobi decoding on every n-token truncation until the entire response sequence l has been generated, which amounts to concatenating all of the consecutive fixed points.

Each sequence generated along the trajectory is counted as a data entry.

It should be noted that for long responses l containing N (N ≫ n) tokens, this truncation avoids slow model evaluation on long inputs.
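As a rough illustration of this data-collection step, reusing the `jacobi_decode` sketch above, the loop below shows how such a pipeline might look; it is not the authors' released code, and the `append_entry` callback and the fixed `total_len` stopping rule are placeholder assumptions (a real pipeline would stop at an end-of-sequence token).

```python
import torch

def collect_jacobi_trajectories(greedy_next_tokens, prompt_ids, total_len,
                                n, vocab_size, append_entry):
    """Sketch of Jacobi trajectory preparation: run Jacobi decoding on consecutive
    n-token truncations of a long response and record every intermediate state.

    append_entry(prefix, state, fixed_point) is a hypothetical callback that stores
    one training entry; the chunking scheme here is illustrative.
    """
    prefix = list(prompt_ids)                          # prompt, then previously converged chunks
    for start in range(0, total_len, n):
        chunk_len = min(n, total_len - start)          # the last truncation may be shorter
        fixed_point, trajectory = jacobi_decode(
            greedy_next_tokens, torch.tensor(prefix), chunk_len, vocab_size)
        for state in trajectory:
            # Every point on the trajectory becomes one data entry:
            # (prefix so far, intermediate n-token state, fixed point it should map to).
            append_entry(prefix.copy(), state.tolist(), fixed_point.tolist())
        prefix.extend(fixed_point.tolist())            # concatenate consecutive fixed points
```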

(2) Training using consistency and AR loss:

The authors jointly optimize the two losses to tune the CLLM: the consistency loss ensures that multiple tokens are predicted at once, while the AR loss prevents the CLLM from deviating from the target LLM, maintaining generation quality.

Figure 4: Schematic of consistency training: the target LLM is tuned to take any state on the Jacobi trajectory as input and always predict the fixed point, i.e., to converge in a single step.

Consistency and AR loss:

(1) Consistency loss

Suppose p denotes the target LLM.

Let $q_\theta(\cdot \mid x)$ denote the CLLM with parameters $\theta$ initialized from p.

For a prompt x and the corresponding Jacobi trajectory $\mathcal{J}$, let $\mathbf{y}$ and $\mathbf{y}^*$ denote a random state on the trajectory and the fixed point, respectively.

The CLLM can be pushed to output $\mathbf{y}^*$ when taking $\mathbf{y}$ as input by minimizing the following loss, called the global consistency (GC) loss:

$$\mathcal{L}_{\mathrm{GC}} = \mathbb{E}_{(x,\mathcal{J}) \sim \mathcal{D},\, \mathbf{y} \sim \mathcal{J}} \left[ \sum_{i=1}^{n} D\Bigl( q_{\theta^{-}}\bigl(\cdot \mid \mathbf{y}^*_{<i}, x\bigr) \,\Big\|\, q_{\theta}\bigl(\cdot \mid \mathbf{y}_{<i}, x\bigr) \Bigr) \right]$$

In this formula, $\theta^{-} = \mathrm{stopgrad}(\theta)$, and the notation $(x,\mathcal{J}) \sim \mathcal{D},\, \mathbf{y} \sim \mathcal{J}$ denotes uniform sampling from the dataset.

D(·||·) denotes a distance measure between two distributions; possible choices are discussed in the GKD method, and the forward KL divergence is mainly used in this work.
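As a concrete illustration of this global consistency objective with a forward KL, the sketch below computes the loss for a single (state, fixed point) pair. The `cllm_logits(prompt_ids, block)` helper, assumed to return an `(n, vocab)` tensor of next-token logits for every position of the block, is hypothetical, and the stop-gradient on the fixed-point branch plays the role of $\theta^{-}$.

```python
import torch
import torch.nn.functional as F

def global_consistency_loss(cllm_logits, prompt_ids, y_state, y_star):
    """Sketch of the GC loss for one (y, y*) pair: pull q_theta(.|y_<i, x) toward the
    stop-gradient distribution q_theta(.|y*_<i, x) with a forward KL, summed over positions.

    cllm_logits(prompt_ids, block) -> (n, vocab) next-token logits (hypothetical helper).
    """
    with torch.no_grad():                                   # theta^- branch: no gradients flow
        target_logp = F.log_softmax(cllm_logits(prompt_ids, y_star), dim=-1)
    student_logp = F.log_softmax(cllm_logits(prompt_ids, y_state), dim=-1)
    # Forward KL D(q_theta^-(.|y*_<i, x) || q_theta(.|y_<i, x)) at every position i.
    kl = F.kl_div(student_logp, target_logp, log_target=True, reduction="none")
    return kl.sum(dim=-1).sum()                             # sum over vocab, then over the n positions
```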

Alternatively, a local consistency (LC) loss can be used, following the formulation in consistency models, where adjacent states $\mathbf{y}^{(j)}$ and $\mathbf{y}^{(j+1)}$ in the Jacobi trajectory $\mathcal{J}$ are driven to produce the same output:

$$\mathcal{L}_{\mathrm{LC}} = \mathbb{E}_{(x,\mathcal{J}) \sim \mathcal{D},\, (\mathbf{y}^{(j)}, \mathbf{y}^{(j+1)}) \sim \mathcal{J}} \left[ \sum_{i=1}^{n} D\Bigl( q_{\theta^{-}}\bigl(\cdot \mid \mathbf{y}^{(j+1)}_{<i}, x\bigr) \,\Big\|\, q_{\theta}\bigl(\cdot \mid \mathbf{y}^{(j)}_{<i}, x\bigr) \Bigr) \right]$$

(2) AR loss:

To avoid deviating from the distribution of the target LLM, the authors incorporate the traditional AR loss based on the generation l of the target LLM p:

$$\mathcal{L}_{\mathrm{AR}} = \mathbb{E}_{(x, l) \sim \mathcal{D}} \left[ - \sum_{i=1}^{N} \log q_{\theta}\bigl(l_i \mid l_{<i}, x\bigr) \right]$$

Combining the two losses with a weight ω, the total loss for training the CLLM is:

$$\mathcal{L}(\theta) = \mathcal{L}_{\mathrm{consistency}} + \omega\, \mathcal{L}_{\mathrm{AR}}$$
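Under the same hypothetical `cllm_logits` helper, and reusing `global_consistency_loss` from the sketch above, a minimal sketch of how the two terms might be combined in a training step is shown below; the weight `omega` here is a placeholder, not a value reported by the authors.

```python
import torch
import torch.nn.functional as F

def ar_loss(cllm_logits, prompt_ids, response_ids):
    """Standard AR cross-entropy on the target model's generation l (sketch):
    -sum_i log q_theta(l_i | l_<i, x)."""
    logits = cllm_logits(prompt_ids, response_ids)          # (N, vocab) next-token logits
    return F.cross_entropy(logits, response_ids, reduction="sum")

def cllm_total_loss(cllm_logits, prompt_ids, y_state, y_star, response_ids, omega=1.0):
    """L(theta) = L_consistency + omega * L_AR (omega is an illustrative placeholder)."""
    consistency = global_consistency_loss(cllm_logits, prompt_ids, y_state, y_star)
    return consistency + omega * ar_loss(cllm_logits, prompt_ids, response_ids)
```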

Experiment

Result:

Overall, the experiments cover three domain-specific tasks plus an open-domain benchmark:

(1) Spider (text-to-SQL)

(2) HumanEval (Python code completion) and GSM8K (math)

(3) The broader open-domain conversation challenge MT-bench

Depending on the task, the reported experiments use fine-tuned Deepseek-Coder-7B-Instruct, LLaMA-2-7B, or ABEL-7B-001 as the target model.

Training and evaluation are performed on NVIDIA A100 40GB servers.


Figure 5: Speedup of CLLMs on different downstream tasks. The results show that CLLMs are significantly faster than the pre-trained models and achieve a speedup comparable to Medusa, but at no additional cost during inference.


Figure 6: Comparison diagram between CLLM and other benchmarks on specific domain tasks (Spider, CSN-Python, GSM8k) and MT-bench. CLLM achieves similar or even better speedups in comparison with Medusa2 while introducing no additional inference cost (judged by FLOPS and memory consumption).

Domain-specific tasks:

Figure 5 shows that, compared with other baselines (including the original target model, Medusa2, and speculative decoding), CLLMs achieve the most significant speedup.

Open-domain conversation challenge (MT-bench):

When a CLLM trained from LLaMA2-7B on the ShareGPT dataset is combined with lookahead decoding, it achieves roughly the same speedup as Medusa2 and obtains comparable scores on MT-bench.

However, CLLM is more adaptable and memory efficient because it does not require modifications to the original architecture of the target model and does not require auxiliary components.

Training Cost:

The fine-tuning cost of CLLMs is modest.

For example, for LLaMA-7B, training on only about 1M tokens achieves a 3.4x speedup on the Spider dataset. When the dataset is large (such as CodeSearchNet-Python), only 10% of it needs to be used to generate the Jacobi trajectories for training CLLMs, yielding an approximately 2.5x speedup.

The total number of tokens can be estimated in the following way:

N = average number of trajectories per prompt × average trajectory length × number of prompts.
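For instance, with purely illustrative numbers (these are not figures reported by the authors), the estimate works out as follows:

```python
# Hypothetical values, chosen only to illustrate the formula:
avg_trajectories_per_prompt = 4      # average number of Jacobi trajectories per prompt
avg_trajectory_length = 64           # average trajectory length, in tokens
num_prompts = 4_000                  # number of training prompts

N = avg_trajectories_per_prompt * avg_trajectory_length * num_prompts
print(f"{N:,} tokens")               # 1,024,000 tokens, on the order of the ~1M mentioned above
```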


Figure 7: Jacobi trajectory comparison between target LLM and CLLM on Spider. Each point on the Jacobi trajectory is a color-coded sequence: correct matches to the AR results are marked in blue, inaccurate ones are marked in red. CLLM exhibits enhanced efficiency, converging to the fixed point 2 times faster than the target LLM. This enhanced efficiency of CLLM can be attributed to the consistency loss, which facilitates the learning of the structure of the n-token sequence for each given prefix.

The left side of Figure 7 shows that the target LLM usually generates only one correct token per iteration. In contrast, the authors observed a fast-forwarding phenomenon in CLLMs, where multiple consecutive tokens are correctly predicted in a single Jacobi iteration.

In addition, in the target LLM, tokens that are generated correctly ahead of time (such as "country" and "H" at indexes 6 and 7 on the left side of Figure 7) are often replaced inaccurately in subsequent iterations.

CLLMs, on the other hand, show the ability to predict correct tokens in advance and keep them unchanged, even when earlier tokens are still incorrect.

The authors call such tokens "stationary tokens." Together, these two phenomena contribute to the fast convergence of CLLMs in Jacobi decoding, leading to considerable generation speedups.

The research team also observed that through training, CLLMs acquired a key language concept - collocation: "a series of words or terms that co-occur more frequently than expected by random chance."

Language is not only made up of isolated words, but also relies heavily on specific word pairs. Examples of collocations are abundant in both natural and programming languages.

They include:

  • Verb + preposition combination (such as "talk to", "remind ... of ...")

  • Verb + noun structures (e.g. "make a decision", "catch a cold")

  • Many domain-specific syntactic structures (e.g. "SELECT ... FROM ..." in SQL, "if ... else" in programming).

The consistency generation objective enables CLLMs to infer such structures from any point in the Jacobi trajectory, helping CLLMs master a large number of collocations and thereby predict multiple words simultaneously, minimizing the number of iteration steps.

Reference link:

https://hao-ai-lab.github.io/blogs/cllm/


Source: jiqizhixin.com