Qwen2.5-VL
This model was released on 2025-02-19 and added to Hugging Face Transformers on 2025-01-23.
Qwen2.5-VL is a multimodal vision-language model, available in 3B, 7B, and 72B parameters, pretrained on 4.1T tokens. The model introduces window attention in the ViT encoder to accelerate training and inference, dynamic FPS sampling on the spatial and temporal dimensions for better video understanding across different sampling rates, and an upgraded MRoPE (multi-resolutional rotary positional encoding) mechanism to better capture and learn temporal dynamics.
You can find all the original Qwen2.5-VL checkpoints under the Qwen2.5-VL collection.
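If you want to see how these design choices show up in a released checkpoint, a minimal sketch (not part of the original documentation) is to inspect the model config. The attribute names below are assumed to follow the `Qwen2_5_VLConfig` and `Qwen2_5_VLTextConfig` classes documented at the end of this page.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
# Window attention size used inside the ViT encoder (attribute of the vision sub-config)
print(config.vision_config.window_size)
# MRoPE settings live in the rope_scaling dict; depending on the transformers version
# the text fields sit either on the top-level config or under config.text_config
text_config = getattr(config, "text_config", config)
print(text_config.rope_scaling)
```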
The example below demonstrates how to generate text based on an image with Pipeline or the AutoModel class.
```python
import torch
from transformers import pipeline

pipe = pipeline(
    task="image-text-to-text",
    model="Qwen/Qwen2.5-VL-7B-Instruct",
    device=0,
    dtype=torch.bfloat16
)
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ]
    }
]
pipe(text=messages, max_new_tokens=20, return_full_text=False)
```

```python
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
            },
            {"type": "text", "text": "Describe this image."}
        ]
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.
The example below uses torchao to quantize only the weights to int4.
```python
import torch
from transformers import TorchAoConfig, Qwen2_5_VLForConditionalGeneration, AutoProcessor

quantization_config = TorchAoConfig("int4_weight_only", group_size=128)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=quantization_config
)
```
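The quantized model is used in the same way as the full-precision one. The snippet below is a minimal, hedged sketch (not part of the original example) that loads the processor and runs the same chat-template generation flow shown earlier, now with the int4 weights; the prompt is only illustrative.

```python
# Sketch: generation with the int4-quantized model follows the same path as above
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
messages = [
    {"role": "user", "content": [{"type": "text", "text": "Give a one-sentence summary of what a vision-language model does."}]}
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=32)
# Decode only the newly generated tokens
print(processor.batch_decode(generated_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True))
```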
- Use Qwen2.5-VL for video inputs by setting `"type": "video"` as shown below.

  ```python
  conversation = [
      {
          "role": "user",
          "content": [
              {"type": "video", "path": "/path/to/video.mp4"},
              {"type": "text", "text": "What happened in the video?"},
          ],
      }
  ]

  inputs = processor.apply_chat_template(
      conversation,
      fps=1,
      add_generation_prompt=True,
      tokenize=True,
      return_dict=True,
      return_tensors="pt"
  ).to(model.device)

  # Inference: Generation of the output
  output_ids = model.generate(**inputs, max_new_tokens=128)
  generated_ids = [
      output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs.input_ids, output_ids)
  ]
  output_text = processor.batch_decode(
      generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
  )
  print(output_text)
  ```
- Use Qwen2.5-VL for a mixed batch of inputs (images, videos, text). Add labels when handling multiple images or videos for better reference, as shown below.

  ```python
  import torch
  from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

  model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
      "Qwen/Qwen2.5-VL-7B-Instruct",
      dtype=torch.float16,
      device_map="auto",
      attn_implementation="sdpa"
  )
  processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")

  conversation = [
      {
          "role": "user",
          "content": [
              {"type": "image"},
              {"type": "text", "text": "Hello, how are you?"}
          ]
      },
      {
          "role": "assistant",
          "content": "I'm doing well, thank you for asking. How can I assist you today?"
      },
      {
          "role": "user",
          "content": [
              {"type": "text", "text": "Can you describe these images and video?"},
              {"type": "image"},
              {"type": "image"},
              {"type": "video"},
              {"type": "text", "text": "These are from my vacation."}
          ]
      },
      {
          "role": "assistant",
          "content": "I'd be happy to describe the images and video for you. Could you please provide more context about your vacation?"
      },
      {
          "role": "user",
          "content": "It was a trip to the mountains. Can you see the details in the images and video?"
      }
  ]

  # default:
  prompt_without_id = processor.apply_chat_template(conversation, add_generation_prompt=True)
  # Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Hello, how are you?<|im_end|>\n<|im_start|>assistant\nI'm doing well, thank you for asking. How can I assist you today?<|im_end|>\n<|im_start|>user\nCan you describe these images and video?<|vision_start|><|image_pad|><|vision_end|><|vision_start|><|image_pad|><|vision_end|><|vision_start|><|video_pad|><|vision_end|>These are from my vacation.<|im_end|>\n<|im_start|>assistant\nI'd be happy to describe the images and video for you. Could you please provide more context about your vacation?<|im_end|>\n<|im_start|>user\nIt was a trip to the mountains. Can you see the details in the images and video?<|im_end|>\n<|im_start|>assistant\n'

  # add ids
  prompt_with_id = processor.apply_chat_template(conversation, add_generation_prompt=True, add_vision_id=True)
  # Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nPicture 1: <|vision_start|><|image_pad|><|vision_end|>Hello, how are you?<|im_end|>\n<|im_start|>assistant\nI'm doing well, thank you for asking. How can I assist you today?<|im_end|>\n<|im_start|>user\nCan you describe these images and video?Picture 2: <|vision_start|><|image_pad|><|vision_end|>Picture 3: <|vision_start|><|image_pad|><|vision_end|>Video 1: <|vision_start|><|video_pad|><|vision_end|>These are from my vacation.<|im_end|>\n<|im_start|>assistant\nI'd be happy to describe the images and video for you. Could you please provide more context about your vacation?<|im_end|>\n<|im_start|>user\nIt was a trip to the mountains. Can you see the details in the images and video?<|im_end|>\n<|im_start|>assistant\n'
  ```
- Use the `min_pixels` and `max_pixels` parameters in `AutoProcessor` to set the resolution (a short sketch after this list shows how to check the resulting visual-token grid).

  ```python
  min_pixels = 224*224
  max_pixels = 2048*2048
  processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
  ```

  Higher resolution can require more compute, whereas reducing the resolution can save memory:

  ```python
  min_pixels = 256*28*28
  max_pixels = 1024*28*28
  processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
  ```
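The `28*28` factor in the second snippet reflects that each visual token covers roughly a 28×28 pixel area (14×14 patches merged 2×2 by the vision encoder), so `min_pixels` and `max_pixels` effectively bound the number of visual tokens per image. The sketch below is an illustrative addition (not from the original doc) that compares the patch grid the image processor picks under the two budgets; `image_grid_thw` is the (temporal, height, width) grid returned alongside `pixel_values`.

```python
import requests
from PIL import Image
from transformers import AutoProcessor

image = Image.open(requests.get(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg",
    stream=True,
).raw)

# Compare the patch grid chosen under the two pixel budgets shown above
for min_pixels, max_pixels in [(224*224, 2048*2048), (256*28*28, 1024*28*28)]:
    processor = AutoProcessor.from_pretrained(
        "Qwen/Qwen2.5-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
    )
    out = processor.image_processor(images=image, return_tensors="pt")
    # The number of visual tokens the model sees is roughly height * width / 4
    # after the 2x2 spatial merge
    print((min_pixels, max_pixels), out["image_grid_thw"])
```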
Qwen2_5_VLConfig

[[autodoc]] Qwen2_5_VLConfig

Qwen2_5_VLTextConfig

[[autodoc]] Qwen2_5_VLTextConfig

Qwen2_5_VLProcessor

[[autodoc]] Qwen2_5_VLProcessor

Qwen2_5_VLTextModel

[[autodoc]] Qwen2_5_VLTextModel
    - forward

Qwen2_5_VLModel

[[autodoc]] Qwen2_5_VLModel
    - forward

Qwen2_5_VLForConditionalGeneration

[[autodoc]] Qwen2_5_VLForConditionalGeneration
    - forward