Since the launch of ChatGLM-6B on March 14, 2023, the GLM series models have received widespread attention and recognition. Especially after ChatGLM3-6B was open sourced, developers have been eagerly anticipating the fourth-generation model from Zhipu AI. That anticipation has now been answered with the release of GLM-4-9B.
To give small models (10B and below) more powerful capabilities, the GLM technical team spent nearly half a year of exploration before launching this new fourth-generation open source GLM model: GLM-4-9B. The model greatly compresses model size while preserving accuracy, delivering faster inference and higher efficiency. The GLM technical team's exploration does not end here; we will continue working to release even more competitive open source models.
During pre-training, we introduced a large language model to screen the data, ultimately obtaining 10T of high-quality multilingual data, more than three times the amount used for ChatGLM3-6B. In addition, we adopted FP8 technology for efficient pre-training, improving training efficiency by 3.5 times over the third-generation model. Taking users' storage constraints into account, the parameter count of GLM-4-9B was increased from 6B to 9B, and we ultimately raised the pre-training compute by 5 times to maximize performance under limited storage conditions.
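The post does not publish the FP8 training stack itself. As a minimal sketch only, the snippet below shows how FP8 mixed-precision training is commonly enabled with NVIDIA's Transformer Engine; the library choice, layer size, and recipe settings are all our assumptions, not Zhipu's actual pipeline:

```python
# Minimal FP8 training sketch with NVIDIA Transformer Engine.
# Assumption: this is NOT the GLM team's published code; the layer
# size and recipe settings are arbitrary illustrations.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Delayed scaling: FP8 scale factors are calibrated from a history of
# observed absolute-max values.
fp8_recipe = recipe.DelayedScaling(margin=0, amax_history_len=16)

layer = te.Linear(4096, 4096, bias=True).cuda()  # stand-in for one transformer sublayer
optimizer = torch.optim.AdamW(layer.parameters(), lr=1e-4)

x = torch.randn(8, 4096, device="cuda")
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)  # the matmul inside runs in FP8
loss = y.float().pow(2).mean()  # dummy loss for illustration
loss.backward()
optimizer.step()
```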
GLM-4-9B is a comprehensive technical upgrade, offering more powerful inference performance, longer-context processing, multi-language support, multi-modal processing, and full All Tools function calling. These upgrades give users more stable, reliable, and accurate technical support, improving both the efficiency and the quality of their work.
The GLM-4-9B series includes multiple versions: the GLM-4-9B base model, the GLM-4-9B-Chat dialogue model, the GLM-4-9B-Chat-1M long-context model, and the GLM-4V-9B multimodal model.
Built on strong pre-training, GLM-4-9B's overall capability in Chinese and English has improved by 40% over ChatGLM3-6B. In particular, it achieves significant gains on the Chinese alignment benchmark AlignBench, the instruction-following benchmark IFEval, and the engineering-code benchmark NaturalCodeBench. Even against Llama 3 8B, which was trained with a larger training volume, GLM-4-9B is not inferior and leads in English performance; on Chinese subject tasks it improves by up to 50% [performance evaluation chart].
The context length of the GLM-4-9B model has been expanded from 128K to 1M tokens, meaning it can process inputs of up to about 2 million characters at once, roughly the length of two copies of "Dream of the Red Chamber" or 125 academic papers. In the "needle in a haystack" experiment, the GLM-4-9B-Chat-1M model demonstrated its ability to process long text input losslessly [illustration of long-text experiment].
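As a minimal sketch of how the 1M-context chat model can be loaded with Hugging Face transformers (the repository name matches the public release; the input file and generation settings are illustrative assumptions):

```python
# Illustrative long-context inference with GLM-4-9B-Chat-1M.
# Assumes a GPU with enough memory; "novel.txt" is a placeholder input.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THUDM/glm-4-9b-chat-1m"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto"
)

long_document = open("novel.txt", encoding="utf-8").read()  # placeholder long input
messages = [{"role": "user", "content": long_document + "\n\nSummarize the plot."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_tensors="pt", return_dict=True
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```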
[Two demo videos showcasing long-text processing capability]
GLM-4-9B supports 26 languages, including Chinese, English, Russian, and more. We expanded the tokenizer vocabulary from 65K to 150K tokens, improving encoding efficiency by 30%. On multi-language understanding and generation tasks, GLM-4-9B-Chat outperforms Llama-3-8B-Instruct [multi-language performance comparison chart].
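Encoding efficiency here means spending fewer tokens on the same text. A small sketch of how one could measure it against the previous generation (the sample string is arbitrary; both tokenizer repositories are the public ones):

```python
# Compare how many tokens two tokenizers spend on the same text.
# Fewer tokens per character means higher encoding efficiency.
from transformers import AutoTokenizer

sample = "智谱AI发布了新一代开源模型GLM-4-9B。" * 100  # arbitrary sample text

for model_id in ("THUDM/chatglm3-6b", "THUDM/glm-4-9b-chat"):
    tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    n_tokens = len(tok.encode(sample))
    print(f"{model_id}: vocab={len(tok)}, tokens={n_tokens}, "
          f"tokens/char={n_tokens / len(sample):.3f}")
```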
The function calling capability of GLM-4-9B has improved by 40% over the previous generation; on the Berkeley Function-Calling Leaderboard, its function calling ability is comparable to that of GPT-4 [function-call performance comparison chart].
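Function calling works by presenting the model with JSON schemas of the available tools so that it can emit a structured call instead of free text. Below is a hedged sketch using the generic transformers tools interface; whether the released chat template accepts tools exactly this way is an assumption, and get_weather is a hypothetical example tool:

```python
# Hedged function-calling sketch for GLM-4-9B-Chat. The tool schema
# follows the common OpenAI-style format; get_weather is hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THUDM/glm-4-9b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto"
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Beijing?"}]
inputs = tokenizer.apply_chat_template(
    messages, tools=tools, add_generation_prompt=True,
    tokenize=True, return_tensors="pt", return_dict=True
).to(model.device)

out = model.generate(**inputs, max_new_tokens=128)
# Expect a structured call, e.g. get_weather with {"city": "Beijing"}.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```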
The "All Tools" capability means that the model can understand and use various external tools (such as code execution, network browsing, and drawing) etc.) to assist in completing the task. At the Zhipu DevDay on January 16, the GLM-4 model was fully upgraded with All Tools capabilities, which can intelligently call web browsers, code interpreters, CogView and other tools to complete complex requests [All Tools task icon].
GLM-4V-9B is an open source multimodal model built on the GLM-4 base. It can process high-resolution input and, by training directly on mixed visual and text data, achieves strong multi-modal performance comparable to GPT-4V, handling complex multi-modal recognition and processing tasks very well [multi-modal application example diagram].
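As a hedged inference sketch for the multimodal model (the image-in-message convention follows the public model card as we understand it; "photo.jpg" is a placeholder):

```python
# Hedged multimodal inference sketch for GLM-4V-9B.
# "photo.jpg" is a placeholder; the message format follows the
# public model card as we understand it.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THUDM/glm-4v-9b"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto"
)

image = Image.open("photo.jpg").convert("RGB")  # placeholder image
messages = [{"role": "user", "image": image, "content": "Describe this image."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_tensors="pt", return_dict=True
).to(model.device)

out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```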
GLM-4-9B has demonstrated powerful performance across a wide variety of tasks and represents a major step forward for open source natural language processing. Whether for academic research or industrial application, GLM-4-9B is an excellent choice.
We sincerely invite you to join the ranks of GLM-4 users and explore the possibilities this model brings: the code is available on GitHub (https://github.com/THUDM/GLM-4) and the model weights on Hugging Face (https://huggingface.co/THUDM/glm-4-9b-chat).