DINOv3
This model was released on 2025-08-13 and added to Hugging Face Transformers on 2025-08-14.
DINOv3 is a family of versatile vision foundation models that outperforms the specialized state of the art across a broad range of settings, without fine-tuning. DINOv3 produces high-quality dense features that achieve outstanding performance on various vision tasks, significantly surpassing previous self- and weakly-supervised foundation models.
You can find all the original DINOv3 checkpoints under the DINOv3 collection.
The example below demonstrates how to obtain an image embedding with Pipeline or the AutoModel class.
```py
import torch
from transformers import pipeline

pipe = pipeline(
    task="image-feature-extraction",
    model="facebook/dinov3-vits16-pretrain-lvd1689m",
    dtype=torch.bfloat16,
)
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg")
```

```py
import torch
from transformers import AutoImageProcessor, AutoModel
from transformers.image_utils import load_image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = load_image(url)

processor = AutoImageProcessor.from_pretrained("facebook/dinov3-vits16-pretrain-lvd1689m")
model = AutoModel.from_pretrained(
    "facebook/dinov3-vits16-pretrain-lvd1689m",
    dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa",
)

inputs = processor(images=image, return_tensors="pt").to(model.device)
with torch.inference_mode():
    outputs = model(**inputs)

pooled_output = outputs.pooler_output
print("Pooled output shape:", pooled_output.shape)
```
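The pooled output works as a single image-level descriptor, so two images can be compared directly, for example for retrieval. The sketch below is a minimal illustration continuing from the example above (it reuses the `processor` and `model` already loaded there; the second URL is the one from the Pipeline example):

```py
# Illustrative sketch (not part of the original example): compare two images
# by cosine similarity of their pooled DINOv3 embeddings.
import torch
from transformers.image_utils import load_image

urls = [
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg",
]
images = [load_image(url) for url in urls]

inputs = processor(images=images, return_tensors="pt").to(model.device)
with torch.inference_mode():
    embeddings = model(**inputs).pooler_output  # [2, hidden_size]

similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=-1)
print("Cosine similarity:", similarity.item())
```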
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.

The example below uses torchao to quantize only the weights to int4.
```py
# pip install torchao
import torch
from transformers import TorchAoConfig, AutoImageProcessor, AutoModel
from torchao.quantization import Int4WeightOnlyConfig
from transformers.image_utils import load_image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = load_image(url)

processor = AutoImageProcessor.from_pretrained("facebook/dinov3-vit7b16-pretrain-lvd1689m")

quant_type = Int4WeightOnlyConfig(group_size=128)
quantization_config = TorchAoConfig(quant_type=quant_type)

model = AutoModel.from_pretrained(
    "facebook/dinov3-vit7b16-pretrain-lvd1689m",
    dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=quantization_config,
)

inputs = processor(images=image, return_tensors="pt").to(model.device)
with torch.inference_mode():
    outputs = model(**inputs)

pooled_output = outputs.pooler_output
print("Pooled output shape:", pooled_output.shape)
```
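To sanity-check the savings from int4 quantization, you can inspect the loaded model's memory footprint with `get_memory_footprint()`; a brief sketch continuing from the example above:

```py
# Continues from the quantization example above. get_memory_footprint()
# reports the memory used by the model's parameters and buffers.
print(f"Model memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```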
The example below shows how to split the output tensor into:
- one embedding for the whole image, commonly referred to as a `CLS` token, useful for classification and retrieval
- register tokens: learnable embeddings that act as dedicated “memory slots” for global information; they reduce high-norm artifacts in patch tokens, yielding cleaner attention maps and better performance on dense prediction tasks
- a set of local embeddings, one for each `16x16` patch of the input image, useful for dense tasks such as semantic segmentation
```py
import torch
from transformers import AutoImageProcessor, AutoModel
from transformers.image_utils import load_image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = load_image(url)
print("Image size:", image.height, image.width)  # [480, 640]

processor = AutoImageProcessor.from_pretrained("facebook/dinov3-vits16-pretrain-lvd1689m")
model = AutoModel.from_pretrained("facebook/dinov3-vits16-pretrain-lvd1689m")

patch_size = model.config.patch_size
print("Patch size:", patch_size)  # 16
print("Num register tokens:", model.config.num_register_tokens)  # 4

inputs = processor(images=image, return_tensors="pt")
print("Preprocessed image size:", inputs.pixel_values.shape)  # [1, 3, 224, 224]

batch_size, _, img_height, img_width = inputs.pixel_values.shape
num_patches_height, num_patches_width = img_height // patch_size, img_width // patch_size
num_patches_flat = num_patches_height * num_patches_width

with torch.inference_mode():
    outputs = model(**inputs)

last_hidden_states = outputs.last_hidden_state
print(last_hidden_states.shape)  # [1, 1 + 4 + 256, 384]
assert last_hidden_states.shape == (batch_size, 1 + model.config.num_register_tokens + num_patches_flat, model.config.hidden_size)

cls_token = last_hidden_states[:, 0, :]
patch_features_flat = last_hidden_states[:, 1 + model.config.num_register_tokens:, :]
patch_features = patch_features_flat.unflatten(1, (num_patches_height, num_patches_width))
```
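For dense tasks, the patch grid is typically converted into a spatial feature map and upsampled to the desired resolution before a prediction head. The sketch below continues from the code above and uses plain bilinear upsampling as an illustration (the upsampling choice is an assumption, not part of the original example):

```py
# Illustrative sketch: turn the patch grid into a channels-first feature map
# and upsample it to the preprocessed image resolution.
import torch.nn.functional as F

# [1, H/16, W/16, hidden_size] -> [1, hidden_size, H/16, W/16]
feature_map = patch_features.permute(0, 3, 1, 2)

# Bilinear upsampling back to the 224x224 preprocessed resolution
upsampled = F.interpolate(feature_map, size=(img_height, img_width), mode="bilinear", align_corners=False)
print(upsampled.shape)  # [1, 384, 224, 224]
```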
DINOv3ViTConfig
[[autodoc]] DINOv3ViTConfig
DINOv3ConvNextConfig
[[autodoc]] DINOv3ConvNextConfig
DINOv3ViTModel
[[autodoc]] DINOv3ViTModel
    - forward
DINOv3ViTBackbone
[[autodoc]] DINOv3ViTBackbone
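The backbone class exposes intermediate spatial feature maps rather than pooled outputs, which is convenient for detection or segmentation heads. The sketch below is a hedged illustration, assuming the backbone can be loaded from the `facebook/dinov3-vits16-pretrain-lvd1689m` checkpoint used in the examples above; the number and shape of the returned feature maps depend on the checkpoint's backbone configuration:

```py
# Hedged sketch: read spatial feature maps from the ViT backbone.
import torch
from transformers import AutoImageProcessor, DINOv3ViTBackbone
from transformers.image_utils import load_image

processor = AutoImageProcessor.from_pretrained("facebook/dinov3-vits16-pretrain-lvd1689m")
backbone = DINOv3ViTBackbone.from_pretrained("facebook/dinov3-vits16-pretrain-lvd1689m")

image = load_image("http://images.cocodataset.org/val2017/000000039769.jpg")
inputs = processor(images=image, return_tensors="pt")

with torch.inference_mode():
    outputs = backbone(**inputs)

# BackboneOutput.feature_maps holds one [batch, channels, height, width] tensor per selected stage
for feature_map in outputs.feature_maps:
    print(feature_map.shape)
```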
DINOv3ConvNextModel
[[autodoc]] DINOv3ConvNextModel
    - forward
DINOv3ViTImageProcessorFast
[[autodoc]] DINOv3ViTImageProcessorFast
    - preprocess
DINOv3ConvNextBackbone
[[autodoc]] DINOv3ConvNextBackbone
    - forward