Google Rushes Out Gemini 1.5: MoE Architecture, 1-Million-Token Context


Today, Google announced the launch of Gemini 1.5.

Gemini 1.5 was built on Google's research and engineering innovations in foundation models and infrastructure. The release introduces a new Mixture-of-Experts (MoE) architecture that makes Gemini 1.5 more efficient to train and to serve.

The first Gemini 1.5 model Google is releasing for early testing is Gemini 1.5 Pro, a mid-size multimodal model optimized for scaling across a wide range of tasks. It performs at a level comparable to 1.0 Ultra, Google's largest model to date, and introduces a breakthrough experimental feature for long-context understanding.

Gemini 1.5 Pro comes with a standard context window of 128,000 tokens. Starting today, however, a limited group of developers and enterprise customers can try it with a context window of up to 1,000,000 tokens via private previews in AI Studio and Vertex AI. Google has also made optimizations aimed at improving latency, reducing compute requirements, and enhancing the user experience.

Google CEO Sundar Pichai and Google DeepMind CEO Demis Hassabis introduced the new model.


Gemini 1.5 builds on Google’s leading research into Transformer and MoE architectures. The traditional Transformer acts as one large neural network, while the MoE model is divided into smaller "expert" neural networks.

Depending on the type of input given, the MoE model learns to selectively activate only the most relevant expert paths in its neural network. This specialization greatly increases the efficiency of the model. Google has been an early adopter and pioneer of deep learning MoE technology through research on sparse gated MoE, GShard-Transformer, Switch-Transformer, M4, and more.
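To make the routing idea concrete, here is a minimal sketch of top-k expert routing in PyTorch. It is an illustrative toy under assumed dimensions, expert count, and k=2 routing, not Gemini's actual architecture.

```python
# A minimal sketch of top-k expert routing, the core mechanism behind MoE
# layers. This is an illustrative toy, not Gemini's actual architecture:
# the dimensions, expert count, and k=2 routing are arbitrary assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_hidden=256, num_experts=8, k=2):
        super().__init__()
        self.k = k
        # Each "expert" is a small independent feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )
        # The gate scores every expert for every token.
        self.gate = nn.Linear(d_model, num_experts)

    def forward(self, x):                        # x: (num_tokens, d_model)
        scores = self.gate(x)                    # (num_tokens, num_experts)
        weights, idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # normalize over chosen experts
        out = torch.zeros_like(x)
        # Only the k selected experts run for each token; the rest stay idle,
        # which is where the efficiency gain over a dense layer comes from.
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

layer = ToyMoELayer()
print(layer(torch.randn(10, 64)).shape)          # torch.Size([10, 64])
```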

Google's latest innovations in model architecture enable Gemini 1.5 to learn complex tasks more quickly while maintaining quality, and to train and serve more efficiently. These efficiencies are helping Google teams iterate on, train, and deliver more advanced versions of Gemini faster than ever, and further optimizations are in the works.

Longer context, more useful features

" of artificial intelligence models" "Context windows" are composed of tokens, which are the building blocks for processing information. A token can be an entire part or subpart of text, image, video, audio, or code. The larger the model's context window, the more information it can receive and process in a given prompt, making its output more consistent, relevant, and useful.

Through a series of machine learning innovations, Google has increased the context window capacity of 1.5 Pro well beyond the original 32,000 tokens of Gemini 1.0. The model can now run in production with up to 1 million tokens of context.

This means 1.5 Pro can process vast amounts of information in one go, including 1 hour of video, 11 hours of audio, or codebases with over 30,000 lines of code or over 700,000 words. In Google's research, context windows of up to 10 million tokens were also tested successfully.
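Prompt budgets at this scale are easy to misjudge, so a quick sanity check is to count tokens before sending a request. A minimal sketch, assuming the google-generativeai Python SDK; the model name string, API key placeholder, and input file are assumptions for illustration:

```python
# A minimal sketch of checking whether a document fits the context window,
# assuming the google-generativeai Python SDK; the model name string, the
# API key placeholder, and the input file are assumptions for illustration.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")                 # hypothetical placeholder
model = genai.GenerativeModel("gemini-1.5-pro-latest")  # assumed model id

CONTEXT_LIMIT = 1_000_000  # the preview's advertised window, in tokens

with open("apollo11_transcript.txt") as f:              # hypothetical local file
    document = f.read()

count = model.count_tokens(document)
print(f"{count.total_tokens} tokens; fits: {count.total_tokens <= CONTEXT_LIMIT}")
```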

Complex reasoning about large amounts of information

Within a given prompt, 1.5 Pro can seamlessly analyze, classify, and summarize large amounts of content. For example, when given the 402-page transcripts from the Apollo 11 moon landing mission, it can reason about conversations, events, and details found across the document.
Gemini 1.5 Pro can understand, reason about, and identify curious details in the 402-page transcripts from the Apollo 11 moon landing mission.

Better understanding and reasoning across modalities

1.5 Pro can perform highly complex understanding and reasoning tasks across different modalities, including video. For example, when given a 44-minute silent film by Buster Keaton, the model could accurately analyze various plot points and events, even reasoning about small details in the film that were easily overlooked.
Gemini 1.5 Pro can identify a scene in a 44-minute silent Buster Keaton film when given a simple line drawing as reference material for a real-life object.
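At a smaller scale, cross-modal prompts can mix images and text in a single request. A minimal sketch, assuming the google-generativeai SDK accepts PIL images as prompt parts; the drawing file and the question are invented for illustration and are not from Google's demo:

```python
# A minimal cross-modal sketch, assuming the google-generativeai SDK accepts
# PIL images as prompt parts; the drawing file and the question are invented
# for illustration and are not from Google's demo.
import PIL.Image
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")                 # hypothetical placeholder
model = genai.GenerativeModel("gemini-1.5-pro-latest")  # assumed model id

drawing = PIL.Image.open("line_drawing.png")            # hypothetical sketch file
response = model.generate_content(
    [drawing, "What real-life object does this line drawing depict?"]
)
print(response.text)
```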

Problem-solving over longer blocks of code

1.5 Pro can perform problem-solving tasks across longer blocks of code. When given a prompt containing more than 100,000 lines of code, it can reason across the examples, suggest helpful modifications, and explain how different parts of the code work.
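As a concrete illustration of this workflow, an entire repository can be concatenated into one long-context prompt with a question asked across files. A hedged sketch, assuming the google-generativeai SDK; the repository path and the question are invented:

```python
# A hedged sketch of packing a repository into one long-context prompt,
# assuming the google-generativeai SDK; the repository path and the question
# are invented for illustration.
import pathlib
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")                 # hypothetical placeholder
model = genai.GenerativeModel("gemini-1.5-pro-latest")  # assumed model id

# Concatenate every Python file, labelling each with its path so the model
# can point at specific locations in its answer.
parts = []
for path in sorted(pathlib.Path("my_project").rglob("*.py")):  # hypothetical repo
    parts.append(f"# file: {path}\n{path.read_text()}")

prompt = ("\n\n".join(parts)
          + "\n\nWhere is the retry logic implemented, and how could it be simplified?")
response = model.generate_content(prompt)
print(response.text)
```
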
Enhanced performance

When tested on a comprehensive panel of text, code, image, audio, and video evaluations, 1.5 Pro outperformed 1.0 Pro on 87% of the benchmarks used for developing Google's large language models (LLMs). Compared with 1.0 Ultra on the same benchmarks, it performs at a broadly similar level.

Gemini 1.5 Pro maintains a high level of performance even as the context window increases.

In the Needle In A Haystack (NIAH) evaluation, where a small piece of text containing a particular fact or statement is deliberately placed within a very long block of text, 1.5 Pro found the embedded text 99% of the time, in blocks of data as long as 1 million tokens.
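The NIAH setup is simple enough to reproduce in a few lines. A toy version, assuming the google-generativeai SDK; the filler text, the needle, and the pass/fail check are invented:

```python
# A toy version of the needle-in-a-haystack test: bury one factual sentence
# inside a long block of filler and check whether the model retrieves it.
# The filler text, needle, and pass/fail check are invented; this assumes
# the google-generativeai SDK.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")                 # hypothetical placeholder
model = genai.GenerativeModel("gemini-1.5-pro-latest")  # assumed model id

needle = "The magic number for project Aurora is 48151."
sentences = ["The sky was clear and the market was quiet that day."] * 40_000
sentences.insert(len(sentences) // 2, needle)           # bury the needle mid-document

prompt = " ".join(sentences) + "\n\nWhat is the magic number for project Aurora?"
answer = model.generate_content(prompt).text
print("retrieved" if "48151" in answer else "missed")
```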

Gemini 1.5 Pro also demonstrates impressive "in-context learning" skills, meaning it can pick up a new skill from information given in a long prompt, without additional fine-tuning. Google tested this on the MTOB (Machine Translation from One Book) benchmark, which measures a model's ability to learn from information it has never seen before. When given a grammar manual for Kalamang, a language with fewer than 200 speakers worldwide, the model learned to translate English into Kalamang at a level similar to a person learning from the same content.
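The same mechanism can be sketched in code: the "training data" is simply placed in the prompt, with no fine-tuning involved. The grammar file and the example sentence below are invented, and the SDK usage follows the same assumptions as the earlier sketches:

```python
# A sketch of in-context learning in the spirit of MTOB: the "training data"
# is simply placed in the prompt, with no fine-tuning involved. The grammar
# file and the sentence are invented; this assumes the google-generativeai SDK.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")                 # hypothetical placeholder
model = genai.GenerativeModel("gemini-1.5-pro-latest")  # assumed model id

with open("kalamang_grammar.txt") as f:                 # hypothetical grammar manual
    grammar = f.read()

prompt = (f"{grammar}\n\n"
          "Using only the grammar materials above, translate into Kalamang:\n"
          "'The children are walking to the river.'")
print(model.generate_content(prompt).text)
```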

Since 1.5 Pro's long context window is a first among large-scale models, Google is continually developing new evaluations and benchmarks to test its novel capabilities.

For more details, see the Gemini 1.5 Pro Technical Report.

Technical report address: https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf

Build and experiment with Gemini models

Google is committed to responsibly bringing each new generation of Gemini models to the billions of people, developers, and enterprise users around the world.

Starting today, Google is making a preview of 1.5 Pro available to developers and enterprise customers through AI Studio and Vertex AI.

When the model is ready for a wider release, Google will launch 1.5 Pro with a standard 128,000-token context window. Soon after, it plans to introduce pricing tiers that start at the standard 128,000-token window and scale up to 1 million tokens as the model improves.

Early testers can try the 1-million-token context window at no cost during the testing period; significant speed improvements are also on the way.

Developers interested in testing 1.5 Pro can register now in AI Studio, while enterprise customers can contact their Vertex AI account team.

Reference link: https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024/#sundar-note

Source: jiqizhixin.com