# Pixio
This model was released on {release_date} and added to Hugging Face Transformers on 2025-12-16.
Pixio is a vision foundation model that uses a ViT as a feature extractor for multiple downstream tasks such as depth estimation, semantic segmentation, feed-forward 3D reconstruction, robotics, and image classification. It is built on the Masked Autoencoder (MAE) pre-training framework, with four minimal yet critical updates: 1) a deeper decoder, 2) larger masking granularity, 3) more class tokens, and 4) web-scale curated training data.
You can find all the original Pixio checkpoints under the Pixio collection.
The example below demonstrates how to obtain an image embedding with the AutoModel class.
```py
import requests
from transformers import AutoImageProcessor, AutoModel
from PIL import Image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/pixio-vith16")
model = AutoModel.from_pretrained("facebook/pixio-vith16")

inputs = processor(images=image, return_tensors="pt")
# hidden_states is only populated when output_hidden_states=True
outputs = model(**inputs, output_hidden_states=True)
features_norm = outputs.last_hidden_state  # class tokens + patch tokens after last LayerNorm
features = outputs.hidden_states[-1]  # class tokens + patch tokens before last LayerNorm
```
The example below shows how to split the output tensor into:

- a set of global embeddings for the whole image, commonly referred to as `CLS` tokens, useful for classification and retrieval. You can either average them (recommended) or concatenate them along the channel dimension (see the pooling sketch after the next example).
- a set of local embeddings, one for each `16x16` patch of the input image, useful for dense tasks such as depth estimation and semantic segmentation.
```py
import requests
from transformers import AutoImageProcessor, AutoModel
from PIL import Image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
print(image.height, image.width)  # [480, 640]

processor = AutoImageProcessor.from_pretrained("facebook/pixio-vith16")
model = AutoModel.from_pretrained("facebook/pixio-vith16")
patch_size = model.config.patch_size

inputs = processor(images=image, return_tensors="pt")
print(inputs.pixel_values.shape)  # [1, 3, 256, 256]

batch_size, rgb, img_height, img_width = inputs.pixel_values.shape
num_patches_height, num_patches_width = img_height // patch_size, img_width // patch_size
num_patches_flat = num_patches_height * num_patches_width

outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
print(last_hidden_states.shape)  # [1, 8 + 256, 1280]
assert last_hidden_states.shape == (batch_size, model.config.n_cls_tokens + num_patches_flat, model.config.hidden_size)

cls_tokens = last_hidden_states[:, :model.config.n_cls_tokens, :]
patch_features = last_hidden_states[:, model.config.n_cls_tokens:, :].unflatten(1, (num_patches_height, num_patches_width))
```
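As noted in the list above, the class tokens can then be pooled into a single global image embedding. A minimal sketch building on the tensors from the example above (shapes assume the `facebook/pixio-vith16` checkpoint, which has 8 class tokens and a hidden size of 1280):

```py
import torch

# average the class tokens into one global embedding (recommended)
global_embedding = cls_tokens.mean(dim=1)  # [1, 1280]

# or concatenate them along the channel dimension for a larger descriptor
global_embedding_concat = cls_tokens.flatten(start_dim=1)  # [1, 8 * 1280]
```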
Use `torch.compile` to speed up inference.
```py
import requests
import torch
from transformers import AutoImageProcessor, AutoModel
from PIL import Image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/pixio-vith16")
model = AutoModel.from_pretrained("facebook/pixio-vith16")
compiled_model = torch.compile(model)

inputs = processor(images=image, return_tensors="pt")
outputs = compiled_model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
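Note that the first call to the compiled model triggers compilation, so any timing comparison should warm it up first. A rough, illustrative harness (reusing `model`, `compiled_model`, and `inputs` from the block above; absolute numbers depend on your hardware):

```py
import time
import torch

# warm-up: the first compiled call pays the one-time compilation cost
with torch.no_grad():
    compiled_model(**inputs)

def bench(fn, n=10):
    # average wall-clock seconds per forward pass over n runs
    start = time.perf_counter()
    with torch.no_grad():
        for _ in range(n):
            fn(**inputs)
    return (time.perf_counter() - start) / n

print(f"eager:    {bench(model):.4f} s/iter")
print(f"compiled: {bench(compiled_model):.4f} s/iter")
```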
## PixioConfig

[[autodoc]] PixioConfig
## PixioModel

[[autodoc]] PixioModel
    - forward
## PixioBackbone

[[autodoc]] PixioBackbone
    - forward
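For dense prediction heads, the backbone class returns spatial feature maps rather than a token sequence. A minimal sketch, assuming Pixio follows the standard Transformers backbone API (`out_indices` to select stages is an assumption here; adjust it to whatever your downstream head expects):

```py
import requests
import torch
from transformers import AutoImageProcessor, PixioBackbone
from PIL import Image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/pixio-vith16")
# out_indices selects which encoder stages are returned as feature maps
backbone = PixioBackbone.from_pretrained("facebook/pixio-vith16", out_indices=[-1])

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = backbone(**inputs)

# one [batch, channels, height, width] tensor per requested stage
for feature_map in outputs.feature_maps:
    print(feature_map.shape)
```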