ALIGN
This model was released on 2021-02-11 and added to Hugging Face Transformers on 2023-03-01.
ALIGN is pretrained on a noisy dataset of 1.8 billion image and alt-text pairs to show that scale can make up for the noise. It uses a dual-encoder architecture, EfficientNet for images and BERT for text, and a contrastive loss to pull matching image–text embeddings together while pushing mismatched embeddings apart. Once trained, ALIGN can encode any image and candidate captions into a shared vector space for zero-shot retrieval or classification without requiring extra labels. This scale-first approach reduces dataset curation costs and powers state-of-the-art image–text retrieval and zero-shot ImageNet classification.
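To make the contrastive objective concrete, the sketch below shows the symmetric InfoNCE-style loss that dual encoders like ALIGN train with. The random tensors stand in for EfficientNet image features and BERT text features, and the projection dimension and temperature values are illustrative (ALIGN learns its temperature during training).

```python
import torch
import torch.nn.functional as F

# Stand-ins for projected EfficientNet image features and BERT text features
batch_size, embed_dim = 8, 640  # illustrative sizes
image_embeds = F.normalize(torch.randn(batch_size, embed_dim), dim=-1)
text_embeds = F.normalize(torch.randn(batch_size, embed_dim), dim=-1)

temperature = 0.05  # illustrative value; ALIGN learns this parameter
logits = image_embeds @ text_embeds.T / temperature  # (batch, batch) similarity matrix

# Matched pairs sit on the diagonal: the i-th image goes with the i-th text,
# so each row (and column) is a classification problem over the batch.
targets = torch.arange(batch_size)
loss = (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2
print(loss)
```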
You can find all the original ALIGN checkpoints under the Kakao Brain organization.
The examples below demonstrate zero-shot image classification with the `Pipeline` or the `AutoModel` class.
Pipeline:

```python
import torch
from transformers import pipeline

pipeline = pipeline(
    task="zero-shot-image-classification",
    model="kakaobrain/align-base",
    device=0,
    dtype=torch.bfloat16,
)
candidate_labels = [
    "a photo of a dog",
    "a photo of a cat",
    "a photo of a person",
]
pipeline(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg",
    candidate_labels=candidate_labels,
)
```

AutoModel:

```python
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("kakaobrain/align-base")
model = AutoModelForZeroShotImageClassification.from_pretrained(
    "kakaobrain/align-base", device_map="auto"
)

# Download the image and open it as a PIL Image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
response = requests.get(url, stream=True)
image = Image.open(response.raw).convert("RGB")

# Encode the image
image_inputs = processor(images=image, return_tensors="pt").to(model.device)
with torch.no_grad():
    image_embeds = model.get_image_features(**image_inputs)

# Encode the candidate labels
candidate_labels = ["a photo of a dog", "a photo of a cat", "a photo of a person"]
text_inputs = processor(text=candidate_labels, padding=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    text_embeds = model.get_text_features(**text_inputs)

# L2-normalize so the dot product is a cosine similarity
image_embeds = image_embeds / image_embeds.norm(p=2, dim=-1, keepdim=True)
text_embeds = text_embeds / text_embeds.norm(p=2, dim=-1, keepdim=True)

# Scale the similarities and convert to probabilities over the labels
logits = (image_embeds @ text_embeds.T) * 100.0
probs = logits.softmax(dim=-1).cpu().squeeze()

for label, score in zip(candidate_labels, probs):
    print(f"{label:20s} → {score.item():.4f}")
```
ALIGN projects the text and visual features into a shared latent space, and the dot product between the projected image and text features is used as the similarity score. The example below demonstrates how to calculate the image-text similarity score with `AlignProcessor` and `AlignModel`.

```python
# Example of using ALIGN for image-text similarity
import torch
import requests
from io import BytesIO
from PIL import Image
from transformers import AlignProcessor, AlignModel

# Load processor and model
processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
model = AlignModel.from_pretrained("kakaobrain/align-base")

# Download an image and convert the downloaded bytes to a PIL Image
url = "https://huggingface.co/roschmid/dog-races/resolve/main/images/Golden_Retriever.jpg"
response = requests.get(url)
image = Image.open(BytesIO(response.content))

texts = ["a photo of a cat", "a photo of a dog"]

# Process image and text inputs
inputs = processor(images=image, text=texts, return_tensors="pt")

# Get the embeddings
with torch.no_grad():
    outputs = model(**inputs)
image_embeds = outputs.image_embeds
text_embeds = outputs.text_embeds

# Normalize embeddings for cosine similarity
image_embeds = image_embeds / image_embeds.norm(dim=1, keepdim=True)
text_embeds = text_embeds / text_embeds.norm(dim=1, keepdim=True)

# Calculate similarity scores
similarity_scores = torch.matmul(text_embeds, image_embeds.T)
print("Similarity scores:", similarity_scores)

# Convert to probabilities over the candidate texts
probs = torch.nn.functional.softmax(similarity_scores, dim=0)
print("Probabilities:", probs)

# Get the most similar text
most_similar_idx = similarity_scores.argmax().item()
print(f"Most similar text: '{texts[most_similar_idx]}'")
```
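If you don't need the raw embeddings, the model's forward output also carries temperature-scaled similarity scores as `logits_per_image` and `logits_per_text` (mirroring CLIP's output), so the probabilities can be read off directly. A short sketch, assuming the `model`, `inputs`, and `texts` from the example above:

```python
# Shortcut: read the temperature-scaled similarity logits off the model output
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)  # probability of each text given the image
print("Probabilities:", probs)
```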
Resources
- Refer to Kakao Brain's Open Source ViT, ALIGN, and the New COYO Text-Image Dataset blog post for more details.
AlignConfig
[[autodoc]] AlignConfig
AlignTextConfig
[[autodoc]] AlignTextConfig
AlignVisionConfig
[[autodoc]] AlignVisionConfig
AlignProcessor
[[autodoc]] AlignProcessor
AlignModel
[[autodoc]] AlignModel
    - forward
    - get_text_features
    - get_image_features
AlignTextModel
[[autodoc]] AlignTextModel
    - forward
AlignVisionModel
[[autodoc]] AlignVisionModel
    - forward