
Jamba 1.5: Featuring the Hybrid Mamba-Transformer Architecture


Jamba 1.5: A Powerful Hybrid Language Model for Long-Context Processing

Jamba 1.5 is a family of large language models from AI21 Labs built for long-context processing. Available in two versions – Jamba 1.5 Large (94 billion active parameters) and Jamba 1.5 Mini (12 billion active parameters) – it uses a hybrid architecture that combines the Mamba Structured State Space Model (SSM) with the traditional Transformer. This design supports an effective context window of 256K tokens, among the largest offered by openly available models.


Key Features and Capabilities:

  • Massive Context Window: Processes up to 256K tokens, ideal for lengthy documents and complex tasks.
  • Hybrid Architecture: Combines the strengths of Transformer and Mamba models for optimal efficiency and performance.
  • Efficient Quantization: Employs ExpertsInt8 quantization for reduced memory footprint and faster processing.
  • Multilingual Support: Functions effectively across nine languages: English, Spanish, French, Portuguese, Italian, Dutch, German, Arabic, and Hebrew.
  • Versatile Applications: Suitable for a wide range of NLP tasks, including question answering, summarization, text generation, and classification.
  • Accessible Deployment: Available via AI21's Studio API, Hugging Face, and cloud partners (a minimal loading sketch follows this list).
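
As an illustration of the Hugging Face route, the following is a minimal sketch of loading and prompting the model with the transformers library. The repository id "ai21labs/AI21-Jamba-1.5-Mini", the generation settings, and the prompt are assumptions for this example; check the official model card for the exact name and recommended configuration.

Python sketch:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face repository id; verify it against the official model card.
model_id = "ai21labs/AI21-Jamba-1.5-Mini"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place weights across the available GPUs
    torch_dtype="auto",  # keep the dtype stored in the checkpoint
)

prompt = "Summarize the Mamba architecture in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))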

Architectural Details:


  • Base Architecture: Hybrid Transformer-Mamba architecture with a Mixture-of-Experts (MoE) module.
  • Model Variants: Jamba-1.5-Large (94B active parameters, 398B total) and Jamba-1.5-Mini (12B active parameters, 52B total).
  • Layer Composition: 9 blocks of 8 layers each, with a 1:7 ratio of Transformer (attention) to Mamba layers.
  • Mixture of Experts (MoE): 16 experts, with the top 2 selected per token (see the routing sketch after this list).
  • Hidden Dimension: 8192.
  • Attention Heads: 64 query heads, 8 key-value heads.
  • Context Length: Up to 256K tokens.
  • Quantization Technique: ExpertsInt8 for the MoE and MLP layers.
  • Activation Function: Integrated Transformer and Mamba activations.
  • Efficiency: Optimized for high throughput and low latency on 8x80GB GPUs.
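
The layer schedule and the top-2 expert routing above can be made concrete with a short, framework-level sketch. This is not AI21's implementation: the exact position of the attention layer inside each block, the tensor shapes, and the simple softmax router are assumptions for illustration only.

Python sketch:

import torch
import torch.nn.functional as F

# 1:7 attention-to-Mamba ratio inside each 8-layer block, 9 blocks in total
# (the exact placement of the attention layer within a block is an assumption).
layer_schedule = (["attention"] + ["mamba"] * 7) * 9

def top2_moe(hidden, router_weight, experts):
    # hidden: (tokens, d_model); router_weight: (n_experts, d_model); experts: list of modules
    logits = hidden @ router_weight.t()             # router score for each expert
    probs = F.softmax(logits, dim=-1)
    gate, idx = probs.topk(2, dim=-1)               # keep the top-2 experts per token
    gate = gate / gate.sum(dim=-1, keepdim=True)    # renormalize the two gate weights
    out = torch.zeros_like(hidden)
    for slot in range(2):
        for e, expert in enumerate(experts):
            mask = idx[:, slot] == e                # tokens routed to expert e in this slot
            if mask.any():
                out[mask] += gate[mask, slot].unsqueeze(-1) * expert(hidden[mask])
    return out

In a Jamba-sized configuration, experts would be a list of 16 feed-forward modules and d_model would be 8192; because only 2 of the 16 experts run for any given token, the active parameter count (94B/12B) is much smaller than the total (398B/52B).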

Accessing and Utilizing Jamba 1.5:

Jamba 1.5 is readily accessible through AI21's Studio API and Hugging Face, and it can be fine-tuned for specific domains to further enhance performance (a brief fine-tuning sketch follows the API example). A Python example using the AI21 API is provided below:

Python Example:

from ai21 import AI21Client
from ai21.models.chat import ChatMessage

# Build the conversation: a single user message.
messages = [ChatMessage(content="What's a tokenizer in 2-3 lines?", role="user")]

# Authenticate with your AI21 Studio API key.
client = AI21Client(api_key="YOUR_API_KEY")  # replace with your actual key

# Request a streamed completion from Jamba 1.5 Mini and print tokens as they arrive.
response = client.chat.completions.create(
    messages=messages,
    model="jamba-1.5-mini",
    stream=True,
)
for chunk in response:
    print(chunk.choices[0].delta.content, end="")
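
As noted above, the model can also be adapted to a specific domain. The snippet below is a rough sketch of attaching LoRA adapters with the peft library; the Hugging Face repo id and the target_modules names are assumptions that should be checked against the model's actual module names, and the training loop (data, trainer, hyperparameters) is omitted.

Python sketch:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Assumed Hugging Face repository id; verify on the official model card.
model = AutoModelForCausalLM.from_pretrained(
    "ai21labs/AI21-Jamba-1.5-Mini",
    device_map="auto",
    torch_dtype="auto",
)

# Attach low-rank adapters to the attention projections (module names are assumptions).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable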


Conclusion:

Jamba 1.5 represents a significant advancement in large language models, offering a compelling blend of power and efficiency. Its ability to handle exceptionally long contexts, coupled with its versatile applications and accessible deployment options, makes it a valuable tool for a wide range of NLP tasks.

Frequently Asked Questions (FAQs):

  • Q1: What is Jamba 1.5? A: A hybrid Transformer-Mamba large language model with 94B (Large) or 12B (Mini) active parameters, optimized for instruction following and long-context processing.
  • Q2: How does Jamba 1.5 handle long contexts efficiently? A: Through its hybrid architecture and ExpertsInt8 quantization, enabling a 256K token context window with reduced memory usage.
  • Q3: What is ExpertsInt8 quantization? A: A compression technique using INT8 precision in the MoE and MLP layers for improved efficiency (see the sketch after this list).
  • Q4: Is Jamba 1.5 publicly available? A: Yes, under the Jamba Open Model License, accessible via Hugging Face.
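
To make the quantization answer concrete, here is a minimal sketch of the general idea behind INT8 weight storage: symmetric per-output-channel quantization with dequantization back to floating point before the matrix multiply. This illustrates the principle only; the actual ExpertsInt8 scheme is tied to the serving engine's fused MoE kernels and differs in detail.

Python sketch:

import torch

def quantize_int8(weight):
    # Symmetric per-output-channel INT8 quantization: weight ≈ scale * q
    scale = weight.abs().amax(dim=1, keepdim=True) / 127.0
    scale = scale.clamp(min=1e-8)                      # avoid division by zero
    q = torch.clamp(torch.round(weight / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize_int8(q, scale):
    # Recover a floating-point weight just before the computation
    return q.float() * scale

# Example: an expert MLP weight stored in INT8 takes ~4x less memory than FP32.
w = torch.randn(1024, 4096)
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
print((w - w_hat).abs().max())                         # small quantization error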
