Zamba
This model was released on 2024-04-16 and added to Hugging Face Transformers on 2024-10-04.
Zamba (blog post) is a large language model (LLM) trained by Zyphra, and made available under an Apache 2.0 license. Please see the Zyphra Hugging Face repository for model weights.
This model was contributed by pglo.
Model details
Zamba-7B-v1 is a hybrid between state-space models (specifically Mamba) and transformers, and was trained using next-token prediction. Zamba uses a shared transformer layer after every 6 Mamba blocks. It uses the Mistral v0.1 tokenizer. We came to this architecture after a series of ablations at small scales. Zamba-7B-v1 was pre-trained on 1T tokens of text and code data.
<img src="https://github.com/user-attachments/assets/c2cff209-b901-483c-87aa-774b82a0769f" width="30%" height="40%" />
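To see how this hybrid layout is reflected in the model configuration, here is a minimal inspection sketch; the exact configuration field names can vary between transformers versions, so it simply prints the whole config:

```python
from transformers import AutoConfig

# Load the Zamba configuration from the Hub and print it to inspect the
# hybrid Mamba / shared-attention layout (field names may differ between
# transformers versions, so we print everything rather than pick fields).
config = AutoConfig.from_pretrained("Zyphra/Zamba-7B-v1")
print(config)
```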
Quick start
Prerequisites
Zamba requires you to use transformers version 4.46.0 or higher:
```bash
pip install "transformers>=4.46.0"
```

In order to run optimized Mamba implementations, you first need to install mamba-ssm and causal-conv1d:
```bash
pip install mamba-ssm "causal-conv1d>=1.2.0"
```

You also have to have the model on a CUDA device.
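A quick way to verify the CUDA requirement before loading the model (a minimal check, not part of the official instructions):

```python
import torch

# The optimized Mamba kernels only run on GPU; check that CUDA is available
# before attempting to load the model with them enabled.
if not torch.cuda.is_available():
    raise RuntimeError("Zamba's optimized Mamba kernels require a CUDA device")
```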
You can run the model without the optimized Mamba kernels, but this is not recommended as it will result in significantly higher latency. To do so, you’ll need to specify use_mamba_kernels=False when loading the model, for example:
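Here is a minimal sketch of loading the model with the kernels disabled; it is slower, but avoids the mamba-ssm, causal-conv1d, and CUDA requirements:

```python
from transformers import AutoModelForCausalLM
import torch

# Fall back to the pure PyTorch Mamba implementation by disabling the
# optimized kernels at load time.
model = AutoModelForCausalLM.from_pretrained(
    "Zyphra/Zamba-7B-v1",
    dtype=torch.bfloat16,
    use_mamba_kernels=False,
)
```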
Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba-7B-v1")
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba-7B-v1", device_map="auto", dtype=torch.bfloat16)

input_text = "A funny prompt would be "
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```

Model card
The model card can be found at the Zyphra/Zamba-7B-v1 Hugging Face repository.
Issues
For issues with model output, or for community discussion, please use the Hugging Face community forum.
License
The model weights are open-sourced under an Apache 2.0 license.
ZambaConfig
[[autodoc]] ZambaConfig
ZambaModel
[[autodoc]] ZambaModel
    - forward
ZambaForCausalLM
[[autodoc]] ZambaForCausalLM
    - forward
ZambaForSequenceClassification
[[autodoc]] transformers.ZambaForSequenceClassification
    - forward