Swin Transformer
This model was released on 2021-03-25 and added to Hugging Face Transformers on 2022-01-21.
Swin Transformer is a hierarchical vision transformer. Images are processed in patches and windowed self-attention is used to capture local information. These windows are shifted across the image to allow for cross-window connections, capturing global information more efficiently. This hierarchical approach with shifted windows allows the Swin Transformer to process images effectively at different scales and achieve linear computational complexity relative to image size, making it a versatile backbone for various vision tasks like image classification and object detection.
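The hierarchy is visible directly in the model configuration. The snippet below is a small illustrative sketch that inspects the configuration of the swin-tiny checkpoint; the commented values are the ones that checkpoint ships with.

```py
from transformers import SwinConfig

# Inspect the hierarchical layout of a pretrained checkpoint.
config = SwinConfig.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

print(config.patch_size)   # 4 -> images are split into 4x4 patches
print(config.window_size)  # 7 -> self-attention runs inside 7x7 windows
print(config.depths)       # [2, 2, 6, 2] -> four stages, patches are merged between stages
print(config.embed_dim)    # 96 -> channel dimension doubles at each patch-merging step
```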
You can find all official Swin Transformer checkpoints under the Microsoft organization.
The example below demonstrates how to classify an image with Pipeline or the AutoModel class.
```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="image-classification",
    model="microsoft/swin-tiny-patch4-window7-224",
    dtype=torch.float16,
    device=0
)
pipeline("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg")
```

```py
import torch
import requests
from PIL import Image
from transformers import AutoModelForImageClassification, AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained(
    "microsoft/swin-tiny-patch4-window7-224",
    use_fast=True,
)
model = AutoModelForImageClassification.from_pretrained(
    "microsoft/swin-tiny-patch4-window7-224",
    device_map="auto"
)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt").to(model.device)

with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax(dim=-1).item()

class_labels = model.config.id2label
predicted_class_label = class_labels[predicted_class_id]
print(f"The predicted class label is: {predicted_class_label}")
```

- Swin can pad the inputs for any input height and width divisible by 32.
- Swin can be used as a backbone. When `output_hidden_states = True`, it outputs both `hidden_states` and `reshaped_hidden_states`. The `reshaped_hidden_states` have a shape of `(batch, num_channels, height, width)` rather than `(batch_size, sequence_length, num_channels)`, as shown in the sketch below.
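The sketch below compares the two output layouts. It uses a random tensor in place of a preprocessed image, and the shapes in the comments assume the swin-tiny checkpoint with a 224x224 input.

```py
import torch
from transformers import SwinModel

model = SwinModel.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

# A random tensor stands in for a preprocessed image batch.
pixel_values = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    outputs = model(pixel_values, output_hidden_states=True)

# Token-style layout: (batch_size, sequence_length, num_channels)
print(outputs.hidden_states[0].shape)           # torch.Size([1, 3136, 96])
# Feature-map layout: (batch, num_channels, height, width)
print(outputs.reshaped_hidden_states[0].shape)  # torch.Size([1, 96, 56, 56])
```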
SwinConfig
[[autodoc]] SwinConfig
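A `SwinConfig` can also be instantiated from scratch and used to build a randomly initialized model. The following is a minimal sketch of the usual configuration-to-model pattern.

```py
from transformers import SwinConfig, SwinModel

# Default configuration matching the swin-tiny layout (weights are randomly initialized).
configuration = SwinConfig()
model = SwinModel(configuration)

# The configuration can be read back from the model.
configuration = model.config
```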
SwinModel
[[autodoc]] SwinModel
    - forward
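A minimal sketch of using the bare `SwinModel` as a feature extractor; it assumes the swin-tiny checkpoint and the COCO demo image, and the commented shapes correspond to a 224x224 input.

```py
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, SwinModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
model = SwinModel.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Per-token features from the final stage and a pooled global feature vector.
print(outputs.last_hidden_state.shape)  # torch.Size([1, 49, 768])
print(outputs.pooler_output.shape)      # torch.Size([1, 768])
```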
SwinForMaskedImageModeling
[[autodoc]] SwinForMaskedImageModeling
    - forward
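A small sketch of the masked image modeling workflow: a boolean patch mask is passed through `bool_masked_pos`, and the model returns a reconstruction loss together with reconstructed pixel values. Reusing the swin-tiny classification checkpoint here is an assumption for illustration; a SimMIM-pretrained Swin checkpoint is the natural fit for this head.

```py
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, SwinForMaskedImageModeling

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
# Illustrative checkpoint choice; the masked-image-modeling decoder is newly initialized here.
model = SwinForMaskedImageModeling.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

num_patches = (model.config.image_size // model.config.patch_size) ** 2
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values

# Randomly mask out roughly half of the patches.
bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss, reconstruction = outputs.loss, outputs.reconstruction
print(reconstruction.shape)  # torch.Size([1, 3, 224, 224])
```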
SwinForImageClassification
[[autodoc]] SwinForImageClassification
    - forward