Netizens are curious whether Mathstral can solve the question "Which is bigger, 9.11 or 9.9?"
Yesterday, the AI community was stumped by a simple question: "Which is bigger, 9.11 or 9.9?" Large language models including OpenAI's GPT-4o and Google's Gemini all got it wrong.
This shows that large language models do not always understand numerical problems the way humans do; for numbers and complex mathematics, specialized models perform better.

Today, the French large-model unicorn Mistral AI released Mathstral, a 7B model focused on mathematical reasoning and scientific discovery, designed to solve advanced math problems that require complex, multi-step logical reasoning. The model is built on Mistral 7B, supports a 32k context window, and is released under the Apache 2.0 open-source license. Mathstral was built to achieve an excellent performance-speed tradeoff, a development philosophy Mistral AI actively promotes, especially through its fine-tuning capabilities.
Mathstral is an instruction-tuned model that can be used as-is or fine-tuned. The model weights are available on Hugging Face:

- Model weights: https://huggingface.co/mistralai/mathstral-7B-v0.1
The chart below shows the MMLU performance difference between Mathstral 7B and Mistral 7B, broken down by subject. Mathstral achieves state-of-the-art reasoning performance at its scale across a variety of industry-standard benchmarks, reaching 56.6% on the MATH dataset and 63.47% on MMLU.
Mathstral's pass rate on MATH (56.6%) is more than 20 percentage points higher than Minerva 540B's. With majority voting over 64 samples (maj@64), Mathstral reaches 68.4% on MATH, and 74.6% when using a reward model. These results have left netizens curious whether Mathstral can solve the question "Which is bigger, 9.11 or 9.9?"
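Majority voting (maj@64) simply means sampling many solutions for the same problem and keeping the most frequent final answer. A minimal sketch of the idea in Python (the sampled answers below are made up for illustration; real usage would draw 64 samples from the model):

```python
from collections import Counter

def majority_vote(answers):
    """Return the most common final answer among sampled solutions."""
    counts = Counter(answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# In practice these would be 64 final answers extracted from model samples.
samples = ["3/4", "3/4", "1/2", "3/4", "2/3"]
print(majority_vote(samples))  # → 3/4
```

The reward-model variant replaces the frequency count with a learned scorer that picks the highest-rated solution instead of the most common one.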
Code large model: Codestral Mamba
- Model weights: https://huggingface.co/mistralai/mamba-codestral-7B-v0.1
Released alongside Mathstral 7B is Codestral Mamba, a model dedicated to code generation. It uses the Mamba2 architecture and is likewise released under the Apache 2.0 open-source license. It is an instructed model with over 7 billion parameters that researchers can use, modify, and distribute freely.

Notably, Codestral Mamba was designed with the help of Mamba authors Albert Gu and Tri Dao. The Transformer architecture has long been the backbone of the AI field, but unlike Transformers, Mamba models offer linear-time inference and can in theory model sequences of unbounded length. The architecture lets users interact with the model extensively and responsively regardless of input length, an efficiency that matters especially for code generation.

In benchmark testing, Codestral Mamba outperformed rival open-source models CodeLlama 7B, CodeGemma-1.1 7B, and DeepSeek on the HumanEval test. Mistral has tested the model, which is available for free on Mistral's la Plateforme API and can handle inputs of up to 256,000 tokens, twice as many as OpenAI's GPT-4o. Since the release, some netizens have already used Codestral Mamba in VS Code and report that it runs very smoothly.
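The linear-versus-quadratic distinction above can be illustrated with toy operation counts. This sketch ignores constants and the real architectures' details; it only shows how the two costs scale with sequence length:

```python
def attention_cost(seq_len):
    # Self-attention compares every token with every other token: O(L^2).
    return seq_len * seq_len

def recurrent_cost(seq_len):
    # A state-space recurrence (as in Mamba) touches each token once: O(L).
    return seq_len

# At longer contexts the gap between the two grows linearly with length:
for length in (1_000, 256_000):
    ratio = attention_cost(length) // recurrent_cost(length)
    print(f"length={length}: attention is {ratio}x the recurrent cost")
```

At the 256,000-token context mentioned above, the quadratic term is 256,000 times the linear one, which is why linear-time inference matters for long inputs.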
References:

- https://mistral.ai/news/mathstral/
- https://mistral.ai/news/codestral-mamba/