BLIP
This model was released on 2022-01-28 and added to Hugging Face Transformers on 2022-12-21.
BLIP (Bootstrapped Language-Image Pretraining) is a vision-language pretraining (VLP) framework designed for both understanding and generation tasks, whereas most existing pretrained models excel at only one or the other. BLIP bootstraps its training data from noisy web image-text pairs: a captioner generates synthetic captions and a filter removes the noisy ones. This improves training data quality and makes more effective use of messy web data.
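As a rough, conceptual sketch of this captioner-and-filter ("CapFilt") idea, the snippet below generates a synthetic caption with a released captioning checkpoint and keeps it only if a released image-text matching checkpoint scores it as a match. This is an illustration, not the original bootstrapping code; the checkpoints, the threshold, and the assumption that index 1 of the matching head is the "match" class are choices made for the example.

```py
# Conceptual CapFilt sketch using released BLIP checkpoints (not the original pretraining code).
import requests
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration, BlipForImageTextRetrieval

caption_processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

itm_processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-base-coco")
filter_model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

with torch.no_grad():
    # Captioner: generate a synthetic caption for the (web) image.
    inputs = caption_processor(images=image, return_tensors="pt")
    caption = caption_processor.decode(captioner.generate(**inputs)[0], skip_special_tokens=True)

    # Filter: keep the caption only if the image-text matching head says it fits.
    itm_inputs = itm_processor(images=image, text=caption, return_tensors="pt")
    itm_logits = filter_model(**itm_inputs).itm_score
    keep = torch.softmax(itm_logits, dim=-1)[0, 1] > 0.5  # assumed threshold and "match" class index

print(caption, bool(keep))
```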
You can find all the original BLIP checkpoints under the BLIP collection.
Click on the BLIP models in the right sidebar for more examples of how to apply BLIP to different vision language tasks.
The examples below demonstrate how to perform visual question answering with the Pipeline or the AutoModel class.
Pipeline

```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="visual-question-answering",
    model="Salesforce/blip-vqa-base",
    dtype=torch.float16,
    device=0
)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
pipeline(question="What is the weather in this image?", image=url)
```

AutoModel

```py
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVisualQuestionAnswering

processor = AutoProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = AutoModelForVisualQuestionAnswering.from_pretrained(
    "Salesforce/blip-vqa-base",
    dtype=torch.float16,
    device_map="auto"
)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

question = "What is the weather in this image?"
inputs = processor(images=image, text=question, return_tensors="pt").to(model.device, torch.float16)

output = model.generate(**inputs)
processor.batch_decode(output, skip_special_tokens=True)[0]
```

Resources
Refer to this notebook to learn how to fine-tune BLIP for image captioning on a custom dataset.
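As a minimal sketch of what such fine-tuning involves, the snippet below runs a single training step on one image-caption pair. The Salesforce/blip-image-captioning-base checkpoint, the example caption, and the optimizer settings are assumptions for illustration; a real run needs a dataset, batching, and multiple epochs as covered in the notebook.

```py
# Minimal single training step for BLIP captioning (sketch, not the full fine-tuning recipe).
import requests
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
model.train()

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
caption = "a fluffy cat walking through deep snow"  # example target caption (assumed)

inputs = processor(images=image, text=caption, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# The language-modeling loss is returned when labels are provided.
outputs = model(**inputs, labels=inputs.input_ids)
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```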
BlipConfig
[[autodoc]] BlipConfig
BlipTextConfig
[[autodoc]] BlipTextConfig
BlipVisionConfig
[[autodoc]] BlipVisionConfig
BlipProcessor
[[autodoc]] BlipProcessor
BlipImageProcessor
[[autodoc]] BlipImageProcessor - preprocess
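A minimal sketch of calling the image processor on its own, assuming the Salesforce/blip-vqa-base checkpoint; in typical usage BlipProcessor wraps it together with the tokenizer:

```py
import requests
from PIL import Image
from transformers import BlipImageProcessor

image_processor = BlipImageProcessor.from_pretrained("Salesforce/blip-vqa-base")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

# Resizes, rescales, and normalizes the image into a pixel_values tensor.
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
print(pixel_values.shape)  # e.g. torch.Size([1, 3, 384, 384])
```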
BlipImageProcessorFast
[[autodoc]] BlipImageProcessorFast - preprocess
BlipModel
BlipModel is going to be deprecated in future versions. Please use BlipForConditionalGeneration, BlipForImageTextRetrieval, or BlipForQuestionAnswering instead, depending on your use case.
[[autodoc]] BlipModel - forward - get_text_features - get_image_features
BlipTextModel
[[autodoc]] BlipTextModel - forward
BlipTextLMHeadModel
[[autodoc]] BlipTextLMHeadModel - forward
BlipVisionModel
[[autodoc]] BlipVisionModel - forward
BlipForConditionalGeneration
[[autodoc]] BlipForConditionalGeneration - forward
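A minimal captioning sketch, assuming the Salesforce/blip-image-captioning-base checkpoint and an example text prompt; passing text makes the generated caption continue the prompt, while omitting it produces an unconditional caption:

```py
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

# Conditional captioning: the generated caption continues the text prompt.
inputs = processor(images=image, text="a photo of", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```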
BlipForImageTextRetrieval
[[autodoc]] BlipForImageTextRetrieval - forward
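A minimal image-text matching sketch, assuming the Salesforce/blip-itm-base-coco checkpoint and an example query text; the forward pass returns matching logits, and use_itm_head=False is expected to return a cosine similarity between the projected image and text embeddings instead:

```py
import requests
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForImageTextRetrieval

processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-base-coco")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, text="a cat walking in the snow", return_tensors="pt")

with torch.no_grad():
    # Image-text matching head: two logits; softmax over them gives a match probability.
    itm_logits = model(**inputs).itm_score
    match_prob = torch.softmax(itm_logits, dim=-1)[:, 1]

    # Alternative scoring: cosine similarity between the projected embeddings.
    cosine_score = model(**inputs, use_itm_head=False).itm_score

print(match_prob, cosine_score)
```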
BlipForQuestionAnswering
[[autodoc]] BlipForQuestionAnswering - forward