# OLMoE
This model was released on 2024-09-03 and added to Hugging Face Transformers on 2024-09-03.
OLMoE is a sparse Mixture-of-Experts (MoE) language model with 7B total parameters, of which only 1B are activated per input token. This gives it an inference cost comparable to similarly sized dense models while training ~3x faster. OLMoE uses fine-grained routing with 64 small experts in each layer and a dropless token-based routing algorithm.
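To make the routing concrete, here is a minimal sketch of top-k token routing in PyTorch. It is illustrative only and not the actual OlmoeSparseMoeBlock: the linear "experts" and toy dimensions are assumptions, while the 64 experts and 8 active experts per token follow the published OLMoE configuration.

```py
import torch
import torch.nn.functional as F

# Toy dimensions for illustration; only num_experts=64 and top_k=8
# mirror OLMoE's reported configuration.
num_experts, top_k, hidden, num_tokens = 64, 8, 128, 4

router = torch.nn.Linear(hidden, num_experts, bias=False)
experts = torch.nn.ModuleList(
    torch.nn.Linear(hidden, hidden) for _ in range(num_experts)
)  # stand-ins for the small expert MLPs

tokens = torch.randn(num_tokens, hidden)

# Score every expert for every token, then keep the top-k per token.
routing_weights = F.softmax(router(tokens), dim=-1)  # (num_tokens, num_experts)
topk_weights, topk_experts = routing_weights.topk(top_k, dim=-1)

# "Dropless" routing: each token is processed by all of its selected
# experts, so no token is discarded when an expert is over capacity.
output = torch.zeros_like(tokens)
for t in range(num_tokens):
    for w, e in zip(topk_weights[t], topk_experts[t]):
        output[t] += w * experts[int(e)](tokens[t])

print(output.shape)  # torch.Size([4, 128])
```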
You can find all the original OLMoE checkpoints under the OLMoE collection.
Click on the OLMoE models in the right sidebar for more examples of how to apply OLMoE to different language tasks.
The examples below demonstrate how to generate text with Pipeline or the AutoModel class.
```py
import torch
from transformers import pipeline

pipe = pipeline(
    task="text-generation",
    model="allenai/OLMoE-1B-7B-0125",
    dtype=torch.float16,
    device=0,
)

result = pipe("Dionysus is the god of")
print(result)
```

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from accelerate import Accelerator

device = Accelerator().device

# device_map="auto" already places the model, so no extra .to(device) is needed
model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMoE-1B-7B-0924",
    attn_implementation="sdpa",
    dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMoE-1B-7B-0924")

inputs = tokenizer("Bitcoin is", return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}
output = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output[0]))
```

## Quantization
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends. The example below uses bitsandbytes to quantize only the weights to 4-bits.
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from accelerate import Accelerator

device = Accelerator().device

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)

# .to(device) is not supported for 4-bit models; device_map="auto" handles placement
model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMoE-1B-7B-0924",
    attn_implementation="sdpa",
    dtype="auto",
    device_map="auto",
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMoE-1B-7B-0924")

inputs = tokenizer("Bitcoin is", return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}
output = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output[0]))
```
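As a quick check that quantization actually shrank the model, you can read its footprint with get_memory_footprint, which reports the memory taken by the model's parameters and buffers. The ~14GB estimate below is back-of-the-envelope arithmetic (7B parameters × 2 bytes in float16), not a measured number.

```py
# Continuing from the example above: with 4-bit nf4 weights the footprint
# should be roughly a quarter of the ~14GB float16 checkpoint.
print(f"{model.get_memory_footprint() / 1e9:.2f} GB")
```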
## OlmoeConfig

[[autodoc]] OlmoeConfig
## OlmoeModel

[[autodoc]] OlmoeModel
    - forward
## OlmoeForCausalLM

[[autodoc]] OlmoeForCausalLM
    - forward