ViTMAE

This model was released on 2021-11-11 and added to Hugging Face Transformers on 2022-01-18.

PyTorch FlashAttention SDPA

ViTMAE is a self-supervised vision model that is pretrained by masking large portions of an image (~75%). An encoder processes the visible image patches and a decoder reconstructs the missing pixels from the encoded patches and mask tokens. After pretraining, the encoder can be reused for downstream tasks like image classification or object detection — often outperforming models trained with supervised learning.
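To make the ~75% figure concrete, here is a rough, back-of-the-envelope count for the default ViT-B/16 setup (224x224 inputs, 16x16 patches) and the default mask ratio of 0.75 from ViTMAEConfig; the exact numbers depend on the checkpoint's configuration.

```py
# Back-of-the-envelope patch counts, assuming the default configuration
# (224x224 image, 16x16 patches, mask_ratio=0.75) -- adjust for other checkpoints.
num_patches = (224 // 16) ** 2               # 196 patches per image
num_visible = int(num_patches * (1 - 0.75))  # 49 patches seen by the encoder
num_masked = num_patches - num_visible       # 147 patches reconstructed by the decoder
```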

You can find all the original ViTMAE checkpoints under the AI at Meta organization.

The example below demonstrates how to reconstruct the missing pixels with the ViTMAEForPreTraining class.

```py
import torch
import requests
from PIL import Image
from transformers import ViTImageProcessor, ViTMAEForPreTraining
from accelerate import Accelerator

device = Accelerator().device

# Load an example image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

# Preprocess the image and move the tensors to the target device
processor = ViTImageProcessor.from_pretrained("facebook/vit-mae-base")
inputs = processor(image, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}

# Load the pretraining model with SDPA attention
model = ViTMAEForPreTraining.from_pretrained("facebook/vit-mae-base", attn_implementation="sdpa").to(device)

with torch.no_grad():
    outputs = model(**inputs)

# Per-patch pixel predictions, shape (batch_size, num_patches, patch_size**2 * num_channels)
reconstruction = outputs.logits
```
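As a follow-up sketch (not part of the original example): the logits are per-patch pixel values, so they can be stitched back into an image with the model's patchify/unpatchify helpers and the returned mask. Note that pixel_values are still ImageNet-normalized here, and checkpoints trained with normalized pixel targets (norm_pix_loss) need per-patch de-normalization before the colors look right.

```py
# Sketch: paste the reconstructed patches back into the original image.
# outputs.mask is (batch, num_patches) with 1 marking a masked patch.
mask = outputs.mask.unsqueeze(-1)
original_patches = model.patchify(inputs["pixel_values"])
# Keep the visible patches from the input, take the model's reconstruction for the masked ones.
combined = original_patches * (1 - mask) + reconstruction * mask
image = model.unpatchify(combined)  # (batch, 3, 224, 224), still normalized with ImageNet statistics
```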
- ViTMAE is typically used in two stages: self-supervised pretraining with ViTMAEForPreTraining, followed by discarding the decoder and fine-tuning the encoder. After pretraining, the encoder weights can be plugged directly into a model like ViTForImageClassification, as sketched after this list.
- Use ViTImageProcessor to prepare images for the model.
- Refer to this notebook to learn how to visualize the reconstructed pixels from ViTMAEForPreTraining.
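A minimal sketch of the second stage, assuming a hypothetical 10-label dataset: loading the MAE-pretrained checkpoint into ViTForImageClassification reuses the encoder weights and initializes a fresh classification head, which then needs fine-tuning (Transformers warns about the newly initialized head and the unused decoder weights).

```py
from transformers import ViTForImageClassification

# Minimal sketch, assuming a hypothetical 10-class dataset: the MAE-pretrained
# encoder is reused; the classification head is newly initialized and must be fine-tuned.
model = ViTForImageClassification.from_pretrained(
    "facebook/vit-mae-base",
    num_labels=10,
)
```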

[[autodoc]] ViTMAEConfig

[[autodoc]] ViTMAEModel - forward

[[autodoc]] ViTMAEForPreTraining - forward