
AltCLIP

This model was released on 2022-11-12 and added to Hugging Face Transformers on 2023-01-04.

PyTorch

AltCLIP replaces the CLIP text encoder with a multilingual XLM-R encoder and aligns image and text representations with teacher learning and contrastive learning.
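Because the two towers are separate modules, you can inspect them directly. The snippet below is a quick sketch that loads the model and prints the text and vision submodules, assuming the usual CLIP-family attribute names (text_model, vision_model) used in Transformers.

from transformers import AltCLIPModel

model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")

# Text tower: an XLM-R style encoder; vision tower: a CLIP ViT encoder.
print(type(model.text_model).__name__)
print(type(model.vision_model).__name__)
print(model.config.text_config.hidden_size, model.config.vision_config.hidden_size)  # hidden sizes of the two towers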

You can find all the original AltCLIP checkpoints under the AltClip collection.

The examples below demonstrate how to calculate similarity scores between an image and one or more captions with the AltCLIPModel class.

import torch
import requests
from PIL import Image
from transformers import AltCLIPModel, AltCLIPProcessor

model = AltCLIPModel.from_pretrained("BAAI/AltCLIP", dtype=torch.bfloat16)
processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
logits_per_image = outputs.logits_per_image  # image-text similarity scores
probs = logits_per_image.softmax(dim=1)  # softmax over the captions gives label probabilities

labels = ["a photo of a cat", "a photo of a dog"]
for label, prob in zip(labels, probs[0]):
    print(f"{label}: {prob.item():.4f}")

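Because the text encoder is based on XLM-R, the same checkpoint also accepts non-English captions. The sketch below repeats the comparison with Chinese captions; BAAI/AltCLIP is trained for English and Chinese (the multilingual AltCLIP-m9 variant covers more languages), and the caption strings here are only illustrative translations.

import torch
import requests
from PIL import Image
from transformers import AltCLIPModel, AltCLIPProcessor

model = AltCLIPModel.from_pretrained("BAAI/AltCLIP", dtype=torch.bfloat16)
processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

# Chinese captions meaning "a photo of a cat" and "a photo of a dog"
labels = ["一张猫的照片", "一张狗的照片"]
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)

for label, prob in zip(labels, probs[0]):
    print(f"{label}: {prob.item():.4f}")
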
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.

The example below uses torchao to quantize only the weights to int4.

# !pip install torchao
import torch
import requests
from PIL import Image
from transformers import AltCLIPModel, AltCLIPProcessor, TorchAoConfig

model = AltCLIPModel.from_pretrained(
    "BAAI/AltCLIP",
    quantization_config=TorchAoConfig("int4_weight_only", group_size=128),
    dtype=torch.bfloat16,
)
processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
logits_per_image = outputs.logits_per_image  # image-text similarity scores
probs = logits_per_image.softmax(dim=1)  # softmax over the captions gives label probabilities

labels = ["a photo of a cat", "a photo of a dog"]
for label, prob in zip(labels, probs[0]):
    print(f"{label}: {prob.item():.4f}")

  • AltCLIP uses bidirectional attention instead of causal attention, and it uses the [CLS] token from XLM-R to represent the text embedding.
  • Use CLIPImageProcessor to resize (or rescale) and normalize images for the model.
  • AltCLIPProcessor combines CLIPImageProcessor and XLMRobertaTokenizer into a single instance to encode text and prepare images (see the sketch after this list).
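
The snippet below is a minimal sketch of using the processor and the two encoders separately: get_text_features returns the projected text embedding (pooled from the [CLS] token), and get_image_features returns the projected image embedding. The cosine similarities at the end are illustrative; the model's logits_per_image applies a learned scale on top of the same quantity.

import torch
import requests
from PIL import Image
from transformers import AltCLIPModel, AltCLIPProcessor

model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

# The processor routes text to the XLM-R tokenizer and images to the CLIP image processor.
text_inputs = processor(text=["a photo of a cat", "a photo of a dog"], return_tensors="pt", padding=True)
image_inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    text_embeds = model.get_text_features(**text_inputs)    # pooled from the [CLS] token, then projected
    image_embeds = model.get_image_features(**image_inputs)

# Cosine similarity between the image and each caption (unscaled, unlike logits_per_image).
text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)
image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
print(image_embeds @ text_embeds.T)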

[[autodoc]] AltCLIPConfig

[[autodoc]] AltCLIPTextConfig

[[autodoc]] AltCLIPVisionConfig

[[autodoc]] AltCLIPModel

[[autodoc]] AltCLIPTextModel

[[autodoc]] AltCLIPVisionModel

[[autodoc]] AltCLIPProcessor