# Zamba2

This model was released on 2024-11-22 and added to Hugging Face Transformers on 2025-01-27.

PyTorch FlashAttention SDPA

Zamba2 is a large language model (LLM) trained by Zyphra, and made available under an Apache 2.0 license. Please see the Zyphra Hugging Face repository for model weights.

This model was contributed by pglo.

Zamba2-1.2B, Zamba2-2.7B and Zamba2-7B are hybrid models that combine state-space model blocks (specifically Mamba2) with transformer layers, and were trained using next-token prediction. Zamba2 applies a shared transformer layer after every 6 Mamba2 blocks and uses the Mistral v0.1 tokenizer. Zyphra arrived at this architecture after a series of ablations at small scales. The models were pre-trained on 2T to 3T tokens, depending on model size.

<img src="https://github.com/user-attachments/assets/c2cff209-b901-483c-87aa-774b82a0769f" width="30%" height="40%" />
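
A quick way to check the hybrid layout of a checkpoint for yourself is to load its configuration and print it; this is only a convenience sketch and downloads just the small configuration file, not the weights.

```py
from transformers import AutoConfig

# Downloads only the configuration (a small JSON file), not the model weights,
# so you can inspect the checkpoint's hyperparameters, including the hybrid
# Mamba2/attention layer layout.
config = AutoConfig.from_pretrained("Zyphra/Zamba2-7B")
print(config)
```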

Zamba2 requires you to use transformers version 4.48.0 or higher:

```bash
pip install "transformers>=4.48.0"
```
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-7B")
model = AutoModelForCausalLM.from_pretrained(
    "Zyphra/Zamba2-7B", device_map="auto", torch_dtype=torch.bfloat16
)

# Tokenize a prompt, generate a continuation, and decode it back to text.
input_text = "What factors contributed to the fall of the Roman Empire?"
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
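
The badges above indicate that the SDPA and FlashAttention backends are supported. As a sketch, you can request a specific backend through the standard `attn_implementation` argument; FlashAttention-2 additionally requires the `flash-attn` package and a supported GPU.

```py
import torch
from transformers import AutoModelForCausalLM

# "sdpa" uses PyTorch's scaled_dot_product_attention; "flash_attention_2"
# additionally requires the flash-attn package and a compatible GPU.
model = AutoModelForCausalLM.from_pretrained(
    "Zyphra/Zamba2-7B",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",  # or "flash_attention_2"
)
```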

The model cards can be found at:

- [Zamba2-1.2B](https://huggingface.co/Zyphra/Zamba2-1.2B)
- [Zamba2-2.7B](https://huggingface.co/Zyphra/Zamba2-2.7B)
- [Zamba2-7B](https://huggingface.co/Zyphra/Zamba2-7B)

For issues with model output or community discussion, please use the Hugging Face community forum.

The model weights are open-sourced under the Apache 2.0 license.

[[autodoc]] Zamba2Config

[[autodoc]] Zamba2Model
    - forward

[[autodoc]] Zamba2ForCausalLM
    - forward

[[autodoc]] Zamba2ForSequenceClassification
    - forward
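
As a quick illustration of the sequence-classification head listed above, the following sketch loads `Zamba2ForSequenceClassification` with an assumed `num_labels=2`; the classification head is freshly initialized and would need fine-tuning before its predictions are meaningful.

```py
import torch
from transformers import AutoTokenizer, Zamba2ForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-7B")

# num_labels=2 is an illustrative assumption; the classification head is
# randomly initialized on top of the pretrained backbone until fine-tuned.
model = Zamba2ForSequenceClassification.from_pretrained(
    "Zyphra/Zamba2-7B",
    num_labels=2,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("The Roman Empire fell gradually.", return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```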