
The first LLM that supports 4-bit floating point quantization is here to solve the deployment problems of LLaMA, BERT, etc.

PHPz
Release: 2023-11-18 15:34:00

Large language model (LLM) compression has long attracted attention, and post-training quantization (PTQ) is one of the most commonly used approaches. However, most existing PTQ methods perform integer quantization, and when the bit width drops below 8, the accuracy of the quantized model degrades significantly. Compared with integer (INT) quantization, floating-point (FP) quantization can better represent long-tailed distributions, so more and more hardware platforms are beginning to support FP quantization. This article presents a solution for FP quantization of large models. The paper was published at EMNLP 2023.


  • Paper address: https://arxiv.org/abs/2310.16836
  • Code address: https://github.com/nbasyl/LLM-FP4

To understand this article, you first need some background on the floating-point format and floating-point quantization. A floating-point number can be expressed by the following formula:

X_FP = (-1)^s · 2^(p − b) · (1 + d_1/2 + d_2/2^2 + … + d_m/2^m)

Here s is the sign bit, m is the number of mantissa bits, and e is the number of exponent bits. p is a value between 0 and 2^e − 1 that indicates which exponent interval the current number falls into, d_i takes the value 0 or 1 and denotes the i-th mantissa bit, and b is the bias, an integer used to shift the exponent interval.
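
To make the formula concrete, here is a minimal Python sketch (not the paper's code) that assembles a value from its sign, exponent, and mantissa bits; the E2M1 example format and bias value are illustrative assumptions, and subnormal handling is omitted:

```python
# Minimal sketch of the floating-point formula above (normal numbers only).
def fp_value(sign: int, exp_bits: list, mantissa_bits: list, bias: int) -> float:
    """sign: 0 or 1; exp_bits / mantissa_bits: lists of 0/1 bits; bias: the integer b."""
    e = len(exp_bits)
    # p is the unsigned integer encoded by the exponent bits, in [0, 2^e - 1].
    p = sum(bit << (e - 1 - i) for i, bit in enumerate(exp_bits))
    # Mantissa = 1 + d_1/2 + d_2/2^2 + ... + d_m/2^m.
    mantissa = 1.0 + sum(d / (2 ** (i + 1)) for i, d in enumerate(mantissa_bits))
    return (-1) ** sign * 2.0 ** (p - bias) * mantissa

# Example: a 4-bit E2M1 format (1 sign, 2 exponent, 1 mantissa bit) with bias 1.
print(fp_value(0, [1, 0], [1], bias=1))  # (+1) * 2^(2-1) * 1.5 = 3.0
```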

In the following sections, we explain how floating-point quantization works. First, the input values must go through a step called "scale and clip," which clips the input to the maximum range that the floating-point format can represent (±Qmax). The specific calculation is as follows:

[Formulas (2)–(4): the scale-and-clip step]


As with integer quantization, FP quantization introduces a full-precision scaling factor to scale the input into an appropriate range. When computing a matrix multiplication, the scaling factor is handled separately from the low-bit matrix multiplication, so it does not add significant overhead. With this full-precision scaling factor, different quantized tensors can be clipped to different maximum and minimum ranges. In practice, the required quantization range is determined from the value range of the input tensor, and the corresponding bias is then derived via formula (4). Note that the bias in equation (4) can serve as a scaling factor for the real values; see equations (2) and (3).
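
As a rough illustration of the scale-and-clip idea (a sketch assuming simple per-tensor scaling, not necessarily the exact formulation of equations (2)–(4)):

```python
import torch

def scale_and_clip(x: torch.Tensor, q_max: float):
    """Clip the scaled input into the representable range [-q_max, q_max].

    alpha plays the role of the full-precision scaling factor: it maps the
    tensor's own maximum magnitude onto the largest representable FP value.
    """
    alpha = x.abs().max() / q_max            # full-precision, per-tensor scale
    x_clipped = torch.clamp(x / alpha, -q_max, q_max)
    return x_clipped, alpha                  # alpha is re-applied after the matmul
```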

The next step of floating-point quantization is to assign the values within the determined quantization range to their corresponding quantization intervals. This step is called "compare and quantize":

[Figure and formula (5): the compare-and-quantize step]

The figure above illustrates the quantization process: the current input value is compared against formula (5) and assigned to the corresponding quantization interval.
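
A simple round-to-nearest stand-in for this compare-and-quantize step might look like the following sketch, which snaps every clipped value to the nearest representable point of the chosen FP format (the explicit `grid` of representable values is an illustrative device, not the paper's implementation):

```python
import torch

def fp_quantize(x_clipped: torch.Tensor, grid: torch.Tensor) -> torch.Tensor:
    """Snap each clipped value to the nearest representable FP grid point.

    `grid` is a sorted 1-D tensor of all values the chosen FP format can
    represent (e.g. the values of a 4-bit E2M1 format).
    """
    # Distance from every input element to every grid point; pick the closest.
    idx = torch.argmin((x_clipped.unsqueeze(-1) - grid).abs(), dim=-1)
    return grid[idx]
```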

After obtaining the quantized activations and weights, the scaling factors are first computed as described above, and the following efficient matrix multiplication is used to accelerate the computation:

[Formula: efficient matrix multiplication with the scaling factors factored out]
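
Because the scaling factors are per-tensor scalars, they can be pulled out of the low-bit product and applied once at the end. A minimal sketch of this factorization (assuming the quantized operands are stored as tensors holding their low-bit values):

```python
import torch

def quantized_matmul(a_q: torch.Tensor, b_q: torch.Tensor,
                     alpha_a: float, alpha_b: float) -> torch.Tensor:
    """Low-bit matmul with the full-precision scales applied once at the end.

    (alpha_a * A_q) @ (alpha_b * B_q) == (alpha_a * alpha_b) * (A_q @ B_q),
    so the expensive inner product stays entirely in the low-bit domain.
    """
    return (alpha_a * alpha_b) * (a_q @ b_q)
```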

The article then points out that the accuracy of FP quantization is closely tied to the choice of exponent bits and of the quantization range.

Previous work has verified that quantization error differs enormously across FP formats (i.e., different exponent/mantissa bit allocations). Only when an appropriate FP format is chosen can FP quantization represent long-tailed distributions better than INT quantization.

[Figure: quantization error of different FP formats]

This article proposes a search-based floating-point quantization algorithm that jointly searches for the most suitable exponent/mantissa bit allocation and the corresponding quantization range.
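
A simplified sketch of such a search is shown below. It brute-forces every exponent/mantissa split of the bit budget together with a few candidate clipping scales and keeps the combination with the smallest reconstruction error; `make_grid(e, m)` is a hypothetical helper that enumerates the representable values of an EeMm format, and `fp_quantize` is the sketch from above. The paper's actual search procedure may differ in its candidates and error metric.

```python
import torch

def search_fp_format(x: torch.Tensor, total_bits: int = 4,
                     scale_candidates=(0.8, 0.9, 1.0)):
    """Pick the exponent/mantissa split and clipping scale with the lowest MSE."""
    best = None
    for e in range(1, total_bits):              # 1 sign bit, e exponent bits, rest mantissa
        m = total_bits - 1 - e
        grid = make_grid(e, m)                  # hypothetical: all representable EeMm values
        for s in scale_candidates:
            alpha = s * x.abs().max() / grid.abs().max()
            x_q = fp_quantize(torch.clamp(x / alpha, grid.min(), grid.max()), grid) * alpha
            err = torch.mean((x - x_q) ** 2)
            if best is None or err < best[0]:
                best = (err, e, m, alpha)
    return best                                  # (error, exponent bits, mantissa bits, scale)
```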

In addition, across various Transformer models (BERT, LLaMA, ViT) there is another phenomenon that makes quantization considerably harder: different channels of the model's activations differ in magnitude by orders of magnitude, while values within the same channel are very consistent. Earlier studies such as LLM.int8() and SmoothQuant observed similar behavior, but this article points out that the phenomenon is not limited to LLMs; similar activation distributions appear in other Transformer models as well (LLaMA, BERT, and DeiT-S, shown below):

[Figure: per-channel activation distributions of LLaMA, BERT, and DeiT-S]

As the figure shows, the abnormally large channels are far larger than the remaining channels, so when quantizing the activation tensor, the quantization precision is largely dictated by these outliers. This suppresses the quantization range available to the other channels and ultimately degrades overall quantization accuracy, to the point where the quantized result collapses once the bit width drops low enough. It is worth noting that only tensor-wise and token-wise quantization allow the scaling factor to be factored out of the efficient matrix multiplication, whereas channel-wise quantization does not support efficient matrix multiplication, as shown in the figure below.

[Figure: tensor-wise, token-wise, and channel-wise quantization granularities and their compatibility with efficient matrix multiplication]
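
A quick diagnostic of this effect (illustrative only, not part of the paper's method) is to compare each channel's maximum magnitude against the median channel; a large ratio means a few outlier channels would dominate any shared quantization range:

```python
import torch

def channel_spread(activations: torch.Tensor):
    """activations: (tokens, channels). Returns per-channel maxima and the
    ratio between the largest channel and the median channel."""
    per_channel_max = activations.abs().amax(dim=0)
    ratio = per_channel_max.max() / per_channel_max.median()
    return per_channel_max, ratio   # large ratio => outlier channels dominate
```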

To address this problem while preserving efficient matrix multiplication, this paper uses a small calibration dataset to pre-compute the maximum activation value of each channel and derive the scaling factors. Each scaling factor is then split into a per-tensor real number multiplied by a per-channel power of 2, and this power of 2 can be represented by the exponent bias of the FP format. The whole process can be expressed by the following formula:

[Formula: decomposing the per-channel scaling factor into a per-tensor real value and per-channel powers of 2]
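
A rough sketch of this pre-computation under simple assumptions (per-channel absolute maxima as the raw scales, and the smallest channel scale as the shared real-valued part; the paper's exact decomposition may differ):

```python
import torch

def decompose_scales(calib_activations: torch.Tensor):
    """calib_activations: (tokens, channels) from a small calibration set.

    Splits the per-channel scale into one shared real-valued scale and a
    per-channel power of two, which can be absorbed into the FP exponent bias.
    """
    per_channel_scale = calib_activations.abs().amax(dim=0).clamp_min(1e-8)
    shared_scale = per_channel_scale.min()                       # per-tensor real part
    exp_bias = torch.round(torch.log2(per_channel_scale / shared_scale))
    pow2_part = torch.pow(2.0, exp_bias)                         # per-channel 2^{b_c}
    return shared_scale, exp_bias, pow2_part
```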

Furthermore, once calibration is complete, the per-channel exponent bias no longer changes, so it can be pre-computed together with the weight quantization: the per-channel exponent bias is folded into the quantized weights to improve quantization accuracy. The complete process is as follows:

[Formula: folding the per-channel exponent bias into the quantized weights]

After this pre-shift, the per-channel full-precision bias that originally sat in the activations becomes a per-tensor real scaling factor, while the decomposed integer bias is moved to the position of the original integer bias in the weights; see formula (4) for details. This method (pre-shifted exponent bias) improves quantization accuracy while preserving efficient matrix multiplication. The method is illustrated in the figure below:

[Figure: the pre-shifted exponent bias method]
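
A rough sketch of the equivalent re-parameterization: the per-channel power-of-two factor is divided out of the activations and multiplied into the matching weight columns, so the matrix product is unchanged. (In the paper this factor is absorbed into the quantized weights' exponent bias; the floating-point multiply below only illustrates the mathematical equivalence.)

```python
import torch

def fold_bias_into_weights(weight: torch.Tensor, pow2_part: torch.Tensor) -> torch.Tensor:
    """weight: (out_features, in_features); pow2_part: (in_features,).

    Since x @ W.T == (x / p) @ (W * p).T when p scales each input channel,
    rescaling the weight columns removes the per-channel factor from the
    activation path at inference time.
    """
    return weight * pow2_part.unsqueeze(0)   # scale each input-channel column
```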

Finally, the article evaluates the Floating-Point Quantization (FPQ) method. On LLaMA, BERT, and ViT models, 4-bit quantization achieves results far exceeding the previous state of the art. In particular, the 4-bit quantized LLaMA-13B model reaches an average score of 63.1 on zero-shot reasoning tasks, only 5.8 points below the full-precision model and 12.7 points above the previous SOTA method, making it one of the few feasible 4-bit quantization schemes known to date.

[Table: 4-bit quantization results of FPQ on LLaMA, BERT, and ViT]
