OLMo3
This model was released on {release_date} and added to Hugging Face Transformers on 2025-09-16.
Olmo3 is an improvement on OLMo2. More details will be released soon.
The examples below demonstrate how to generate text with Pipeline, AutoModel, and from the command line.
```py
import torch
from transformers import pipeline

pipe = pipeline(
    task="text-generation",
    model="allenai/TBA",
    dtype=torch.bfloat16,
    device=0,
)
result = pipe("Plants create energy through a process known as")
print(result)
```

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/TBA")
model = AutoModelForCausalLM.from_pretrained(
    "allenai/TBA",
    dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="sdpa"
)
input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to(model.device)

output = model.generate(**input_ids, max_length=50, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

```bash
echo -e "Plants create energy through a process known as" | transformers run --task text-generation --model allenai/TBA --device 0
```

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.
The example below uses torchao to quantize only the weights to 4-bits.
```py
# pip install torchao
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig

torchao_config = TorchAoConfig(
    "int4_weight_only",
    group_size=128
)
tokenizer = AutoTokenizer.from_pretrained("allenai/TBA")
model = AutoModelForCausalLM.from_pretrained(
    "allenai/TBA",
    quantization_config=torchao_config,
    dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="sdpa"
)
input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to(model.device)

output = model.generate(**input_ids, max_length=50, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
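To verify the savings, you can check the model's in-memory size. A minimal sketch, assuming `model` is the quantized model loaded above:

```py
# get_memory_footprint() reports the size of the model's parameters (and buffers) in bytes
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```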
- Load specific intermediate checkpoints by adding the `revision` parameter to `from_pretrained`.

```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("allenai/TBA", revision="stage1-step140000-tokens294B")
```
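To discover which intermediate checkpoints a repository exposes, you can list its branches with huggingface_hub. A sketch, reusing the `allenai/TBA` placeholder for the final model id:

```py
from huggingface_hub import list_repo_refs

# Intermediate checkpoints are typically published as branches,
# e.g. "stage1-step140000-tokens294B"
refs = list_repo_refs("allenai/TBA")
for branch in refs.branches:
    print(branch.name)
```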
Olmo3Config
[[autodoc]] Olmo3Config
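As with other Transformers models, the configuration can be instantiated directly to build a randomly initialized model. A minimal sketch, assuming the standard config API (the default hyperparameters may differ from those of the released checkpoints):

```py
from transformers import Olmo3Config, Olmo3Model

# Build a configuration with default values
configuration = Olmo3Config()

# Initialize a model with random weights from the configuration
model = Olmo3Model(configuration)

# Access the model configuration
configuration = model.config
```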
Olmo3ForCausalLM
[[autodoc]] Olmo3ForCausalLM
Olmo3Model
[[autodoc]] Olmo3Model
    - forward
Olmo3PreTrainedModel
[[autodoc]] Olmo3PreTrainedModel
    - forward