ALIGN

This model was released on 2021-02-11 and added to Hugging Face Transformers on 2023-03-01.

ALIGN is pretrained on a noisy dataset of 1.8 billion image and alt-text pairs, showing that scale can make up for the noise. It uses a dual-encoder architecture, EfficientNet for images and BERT for text, with a contrastive loss that pulls matching image–text embeddings together while pushing mismatched embeddings apart. Once trained, ALIGN can encode any image and candidate captions into a shared vector space for zero-shot retrieval or classification without requiring extra labels. This scale-first approach reduces dataset curation costs and powers state-of-the-art image–text retrieval and zero-shot ImageNet classification.
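
To make the contrastive objective concrete, the following is a minimal sketch of a symmetric InfoNCE-style loss over a batch of paired image and text embeddings. The function name, the fixed temperature value, and the assumption that both encoders already produce projected embeddings are illustrative assumptions, not ALIGN's exact training code.

    import torch
    import torch.nn.functional as F

    def contrastive_loss(image_embeds, text_embeds, temperature=0.05):
        # Hypothetical sketch: both inputs are (batch, dim) projections from the two encoders.
        image_embeds = F.normalize(image_embeds, dim=-1)
        text_embeds = F.normalize(text_embeds, dim=-1)
        # Pairwise cosine similarities, scaled by a temperature.
        logits = image_embeds @ text_embeds.T / temperature
        # The matching pair sits on the diagonal; every other entry is a negative.
        targets = torch.arange(logits.size(0))
        loss_i2t = F.cross_entropy(logits, targets)    # image-to-text direction
        loss_t2i = F.cross_entropy(logits.T, targets)  # text-to-image direction
        return (loss_i2t + loss_t2i) / 2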

You can find all the original ALIGN checkpoints under the Kakao Brain organization.

The examples below demonstrate zero-shot image classification with the Pipeline or the AutoModel class.

    import torch
    from transformers import pipeline

    pipeline = pipeline(
        task="zero-shot-image-classification",
        model="kakaobrain/align-base",
        device=0,
        dtype=torch.bfloat16
    )
    candidate_labels = [
        "a photo of a dog",
        "a photo of a cat",
        "a photo of a person"
    ]
    pipeline("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg", candidate_labels=candidate_labels)

The same classification can be done with the AutoModel class by computing the image and text embeddings directly.

    import torch
    import requests
    from PIL import Image
    from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

    processor = AutoProcessor.from_pretrained("kakaobrain/align-base")
    model = AutoModelForZeroShotImageClassification.from_pretrained("kakaobrain/align-base", device_map="auto")

    # Download and preprocess the image
    url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
    image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
    image_inputs = processor(images=image, return_tensors="pt").to(model.device)
    with torch.no_grad():
        image_embeds = model.get_image_features(**image_inputs)

    # Encode the candidate labels
    candidate_labels = ["a photo of a dog", "a photo of a cat", "a photo of a person"]
    text_inputs = processor(text=candidate_labels, padding=True, return_tensors="pt").to(model.device)
    with torch.no_grad():
        text_embeds = model.get_text_features(**text_inputs)

    # Normalize the embeddings and compute per-label probabilities
    image_embeds = image_embeds / image_embeds.norm(p=2, dim=-1, keepdim=True)
    text_embeds = text_embeds / text_embeds.norm(p=2, dim=-1, keepdim=True)
    logits = (image_embeds @ text_embeds.T) * 100.0
    probs = logits.softmax(dim=-1).cpu().squeeze()
    for label, score in zip(candidate_labels, probs):
        print(f"{label:20s}{score.item():.4f}")
  • ALIGN projects the text and visual features into a shared latent space, and the dot product between the projected image and text features is used as the similarity score. The example below demonstrates how to calculate the image–text similarity score with AlignProcessor and AlignModel.

    # Example of using ALIGN for image-text similarity
    from transformers import AlignProcessor, AlignModel
    import torch
    from PIL import Image
    import requests
    from io import BytesIO
    # Load processor and model
    processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
    model = AlignModel.from_pretrained("kakaobrain/align-base")
    # Download image from URL
    url = "https://huggingface.co/roschmid/dog-races/resolve/main/images/Golden_Retriever.jpg"
    response = requests.get(url)
    image = Image.open(BytesIO(response.content)) # Convert the downloaded bytes to a PIL Image
    texts = ["a photo of a cat", "a photo of a dog"]
    # Process image and text inputs
    inputs = processor(images=image, text=texts, return_tensors="pt")
    # Get the embeddings
    with torch.no_grad():
        outputs = model(**inputs)
    image_embeds = outputs.image_embeds
    text_embeds = outputs.text_embeds
    # Normalize embeddings for cosine similarity
    image_embeds = image_embeds / image_embeds.norm(dim=1, keepdim=True)
    text_embeds = text_embeds / text_embeds.norm(dim=1, keepdim=True)
    # Calculate similarity scores
    similarity_scores = torch.matmul(text_embeds, image_embeds.T)
    # Print raw scores
    print("Similarity scores:", similarity_scores)
    # Convert to probabilities
    probs = torch.nn.functional.softmax(similarity_scores, dim=0)
    print("Probabilities:", probs)
    # Get the most similar text
    most_similar_idx = similarity_scores.argmax().item()
    print(f"Most similar text: '{texts[most_similar_idx]}'")

[[autodoc]] AlignConfig

[[autodoc]] AlignTextConfig

[[autodoc]] AlignVisionConfig

[[autodoc]] AlignProcessor

[[autodoc]] AlignModel
    - forward
    - get_text_features
    - get_image_features

[[autodoc]] AlignTextModel
    - forward

[[autodoc]] AlignVisionModel
    - forward