OLMo

This model was released on 2024-02-01 and added to Hugging Face Transformers on 2024-04-17.

PyTorch FlashAttention SDPA Tensor parallelism

OLMo is a 7B-parameter dense language model. It uses SwiGLU activations, non-parametric layer normalization, rotary positional embeddings, and a BPE tokenizer that masks personally identifiable information. It is pretrained on Dolma, a 3T-token dataset. OLMo was released to provide complete transparency: not just the model weights, but also the training data, training code, and evaluation code are available, enabling more research on language models.
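
These architectural choices are reflected in the model configuration. The snippet below is a minimal sketch that inspects a checkpoint's OlmoConfig; the attribute names follow the LLaMA-style configs in Transformers, and the printed values depend on the checkpoint you load.

from transformers import AutoConfig

config = AutoConfig.from_pretrained("allenai/OLMo-7B-hf")

print(config.model_type)   # "olmo"
print(config.hidden_act)   # the SwiGLU MLP is exposed as the gated "silu" activation
print(config.hidden_size, config.num_hidden_layers, config.num_attention_heads)
print(config.max_position_embeddings, config.rope_theta)   # rotary embedding settings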

You can find all the original OLMo checkpoints under the OLMo collection.

Click on the OLMo models in the right sidebar for more examples of how to apply OLMo to different language tasks.

The examples below demonstrate how to generate text with Pipeline, the AutoModel class, and the transformers CLI.

import torch
from transformers import pipeline
pipe = pipeline(
    task="text-generation",
    model="allenai/OLMo-7B-hf",
    dtype=torch.float16,
    device=0,
)
result = pipe("Plants create energy through a process known as")
print(result)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-hf")
model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-7B-hf",
    dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa"
)
input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to(model.device)
output = model.generate(**input_ids, max_length=50, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
You can also generate text from the command line with the transformers CLI:
echo -e "Plants create energy through a process known as" | transformers run --task text-generation --model allenai/OLMo-7B-hf --device 0
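
OLMo also supports FlashAttention-2 as an alternative to SDPA. A minimal sketch, assuming the flash-attn package is installed and a supported GPU is available:

import torch
from transformers import AutoModelForCausalLM

# FlashAttention-2 requires the model in half precision on a supported GPU
model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-7B-hf",
    dtype=torch.float16,
    device_map="auto",
    attn_implementation="flash_attention_2"
)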

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.

The example below uses bitsandbytes to quantize only the weights to 4 bits.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4"
)
model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-7B-hf",
    attn_implementation="sdpa",
    dtype=torch.float16,
    device_map="auto",
    quantization_config=quantization_config
)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-hf")
inputs = tokenizer("Bitcoin is", return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}
output = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output[0]))
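
To verify the savings, you can compare the quantized model's footprint against the roughly 14 GB a 7B-parameter model occupies in float16 (7B parameters × 2 bytes). A quick check, reusing the quantized model from the example above:

# approximate memory taken by the model weights, in GB
print(f"{model.get_memory_footprint() / 1e9:.1f} GB")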

[[autodoc]] OlmoConfig

[[autodoc]] OlmoModel - forward

[[autodoc]] OlmoForCausalLM - forward