
Zamba

This model was released on 2024-04-16 and added to Hugging Face Transformers on 2024-10-04.

PyTorch

Zamba (blog post) is a large language model (LLM) trained by Zyphra and made available under an Apache 2.0 license. Please see the Zyphra Hugging Face repository for the model weights.

This model was contributed by pglo.

Zamba-7B-v1 is a hybrid of state-space models (specifically Mamba) and transformer blocks, and was trained using next-token prediction. Zamba uses a shared transformer layer after every 6 Mamba blocks. It uses the Mistral v0.1 tokenizer. We arrived at this architecture after a series of ablations at small scales. Zamba-7B-v1 was pre-trained on 1T tokens of text and code data.

<img src="https://github.com/user-attachments/assets/c2cff209-b901-483c-87aa-774b82a0769f" width="30%" height="40%" />
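
If you want to check a checkpoint's hybrid-architecture hyperparameters without downloading the 7B weights, you can load just its configuration. This is a minimal sketch; the exact attribute names describing the Mamba/attention interleaving are checkpoint-specific, so it simply prints the full config.

from transformers import AutoConfig

# Download and parse only the configuration file, not the model weights.
config = AutoConfig.from_pretrained("Zyphra/Zamba-7B-v1")

# Printing the config shows the hybrid-architecture settings,
# e.g. the number of layers and the Mamba-specific hyperparameters.
print(config)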

Zamba requires transformers version 4.46.0 or higher:

pip install "transformers>=4.46.0"

In order to run optimized Mamba implementations, you first need to install mamba-ssm and causal-conv1d:

pip install mamba-ssm "causal-conv1d>=1.2.0"
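
As a quick sanity check (a minimal sketch, not part of the transformers API), you can verify that both packages import cleanly in your environment:

# Check whether the optimized Mamba kernel packages are importable.
try:
    import mamba_ssm      # noqa: F401
    import causal_conv1d  # noqa: F401
    print("Optimized Mamba kernels are available.")
except ImportError as exc:
    print(f"Optimized kernels unavailable; transformers will fall back to the slower path: {exc}")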

The model must also be on a CUDA device to use these kernels.

You can run the model without the optimized Mamba kernels, but this is not recommended because it results in significantly higher latency. To do so, specify use_mamba_kernels=False when loading the model.
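
For example, here is a minimal sketch of the fallback path; everything except use_mamba_kernels matches the generation example below:

from transformers import AutoModelForCausalLM
import torch

# Disable the fused Mamba kernels; the model then uses the slower
# pure-PyTorch implementation of the Mamba blocks.
model = AutoModelForCausalLM.from_pretrained(
    "Zyphra/Zamba-7B-v1",
    device_map="auto",
    dtype=torch.bfloat16,
    use_mamba_kernels=False,
)

With the optimized kernels installed, a basic generation example looks like this: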

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the tokenizer and the model in bfloat16, placing it automatically on available devices.
tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba-7B-v1")
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba-7B-v1", device_map="auto", dtype=torch.bfloat16)

# Tokenize the prompt and move it to the same device as the model.
input_text = "A funny prompt would be "
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)

# Generate up to 100 new tokens and decode the result.
outputs = model.generate(**input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))

The model card can be found at the Zyphra/Zamba-7B-v1 repository on the Hugging Face Hub.

For issues with model output, or for community discussion, please use the Hugging Face community forum.

The model weights are open-sourced under an Apache 2.0 license.

[[autodoc]] ZambaConfig

[[autodoc]] ZambaModel
    - forward

[[autodoc]] ZambaForCausalLM
    - forward

[[autodoc]] transformers.ZambaForSequenceClassification
    - forward