ViTMAE
This model was released on 2021-11-11 and added to Hugging Face Transformers on 2022-01-18.
ViTMAE is a self-supervised vision model that is pretrained by masking large portions of an image (~75%). An encoder processes the visible image patches and a decoder reconstructs the missing pixels from the encoded patches and mask tokens. After pretraining, the encoder can be reused for downstream tasks like image classification or object detection, often outperforming models trained with supervised learning.

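The fraction of masked patches is controlled by the `mask_ratio` field of `ViTMAEConfig`, which defaults to 0.75. The snippet below is a minimal sketch of changing that ratio for a randomly initialized model; the 0.9 value is purely illustrative, not a recommended setting.

```python
from transformers import ViTMAEConfig, ViTMAEForPreTraining

# Illustrative assumption: mask 90% of the patches instead of the default 75%.
config = ViTMAEConfig(mask_ratio=0.9)
model = ViTMAEForPreTraining(config)  # randomly initialized, for pretraining from scratch
```
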
You can find all the original ViTMAE checkpoints under the AI at Meta organization.
The example below demonstrates how to reconstruct the missing pixels with the ViTMAEForPreTraining class.
```python
import torch
import requests
from PIL import Image
from transformers import ViTImageProcessor, ViTMAEForPreTraining
from accelerate import Accelerator

device = Accelerator().device

# Load an example image.
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

# Preprocess the image and move the tensors to the target device.
processor = ViTImageProcessor.from_pretrained("facebook/vit-mae-base")
inputs = processor(image, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}

# Run the pretraining model; it masks ~75% of the patches and predicts their pixel values.
model = ViTMAEForPreTraining.from_pretrained("facebook/vit-mae-base", attn_implementation="sdpa").to(device)
with torch.no_grad():
    outputs = model(**inputs)

# Per-patch pixel predictions with shape (batch_size, num_patches, patch_size**2 * 3).
reconstruction = outputs.logits
```

- ViTMAE is typically used in two stages: self-supervised pretraining with `ViTMAEForPreTraining`, then discarding the decoder and fine-tuning the encoder. After fine-tuning, the encoder weights can be plugged into a model like `ViTForImageClassification` (see the sketch after this list).
- Use `ViTImageProcessor` for input preparation.
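
As a minimal sketch of the second stage, the pretrained checkpoint can be loaded directly into `ViTForImageClassification`. Only the encoder weights are reused; the classification head is newly initialized and still needs fine-tuning on a labeled dataset. The label count here is an arbitrary example.

```python
from transformers import ViTForImageClassification

# Reuse the MAE-pretrained encoder; the classifier head is randomly initialized
# and must be fine-tuned (num_labels=10 is only an illustrative value).
classifier = ViTForImageClassification.from_pretrained(
    "facebook/vit-mae-base",
    num_labels=10,
)
```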
Resources
- Refer to this notebook to learn how to visualize the reconstructed pixels from `ViTMAEForPreTraining`.
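
As a rough sketch of that kind of visualization (reusing `model`, `processor`, and `outputs` from the example above), the patch-level logits can be folded back into an image and de-normalized with the processor statistics. The `unpatchify` helper and the normalization handling are assumptions based on the reference implementation; the linked notebook shows the full procedure, including compositing visible and masked patches.

```python
import numpy as np

# Fold the per-patch pixel predictions back into an image tensor
# of shape (batch_size, channels, height, width).
pixels = model.unpatchify(outputs.logits).detach().cpu()

# Undo the normalization applied by ViTImageProcessor.
mean = torch.tensor(processor.image_mean).view(1, 3, 1, 1)
std = torch.tensor(processor.image_std).view(1, 3, 1, 1)
reconstructed = (pixels * std + mean).clamp(0, 1)

# Convert the first image to a PIL image for inspection.
result = Image.fromarray((reconstructed[0].permute(1, 2, 0).numpy() * 255).astype(np.uint8))
```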
ViTMAEConfig
[[autodoc]] ViTMAEConfig
ViTMAEModel
[[autodoc]] ViTMAEModel
    - forward
ViTMAEForPreTraining
[[autodoc]] transformers.ViTMAEForPreTraining
    - forward