
add qwen2.5vl #35569

Merged 46 commits from add_qwen2.5vl into main on Jan 23, 2025
Commits (46)
e693210 add qwen2.5vl (ShuaiBai623, Jan 8, 2025)
8a713a9 fix (ShuaiBai623, Jan 8, 2025)
be1f811 pass check table (ShuaiBai623, Jan 8, 2025)
a666eb0 add modular file (ShuaiBai623, Jan 14, 2025)
a96b32d fix style (ShuaiBai623, Jan 14, 2025)
8191ce9 Update src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py (ShuaiBai623, Jan 14, 2025)
2b70136 Update src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py (ShuaiBai623, Jan 14, 2025)
c23055b Update src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py (ShuaiBai623, Jan 14, 2025)
b1e2f68 padd copy check (ShuaiBai623, Jan 14, 2025)
9d0e553 use modular (ShuaiBai623, Jan 17, 2025)
23e6635 fix (ShuaiBai623, Jan 17, 2025)
aa0dc95 fix (ShuaiBai623, Jan 17, 2025)
05a1ab1 fix (ShuaiBai623, Jan 17, 2025)
00b05f2 update flashatt2&sdpa support_list (ShuaiBai623, Jan 17, 2025)
47eb069 Update docs/source/en/_toctree.yml (ShuaiBai623, Jan 18, 2025)
35b9dbb Update docs/source/en/model_doc/qwen2_5_vl.md (ShuaiBai623, Jan 18, 2025)
67a10da Update docs/source/en/model_doc/qwen2_5_vl.md (ShuaiBai623, Jan 18, 2025)
e37ce58 Update docs/source/en/model_doc/qwen2_5_vl.md (ShuaiBai623, Jan 18, 2025)
fcbb355 Update docs/source/en/model_doc/qwen2_5_vl.md (ShuaiBai623, Jan 18, 2025)
7f3c66e Update src/transformers/models/qwen2_5_vl/modular_qwen2_5_vl.py (ShuaiBai623, Jan 18, 2025)
f5a4c2b update config (ShuaiBai623, Jan 18, 2025)
00f9832 Merge branch 'main' into add_qwen2.5vl (ShuaiBai623, Jan 18, 2025)
f57827d update (ShuaiBai623, Jan 18, 2025)
b21bd62 fix hf path (ShuaiBai623, Jan 18, 2025)
18ea4be rename Qwen2_5_VLVideosKwargs (gewenbin0992, Jan 20, 2025)
faae9f4 fix (gewenbin0992, Jan 20, 2025)
cd3fd21 fix (gewenbin0992, Jan 20, 2025)
f8c465c update (gewenbin0992, Jan 21, 2025)
84247a4 excuted modular (gewenbin0992, Jan 21, 2025)
0cd1f62 rollback init (gewenbin0992, Jan 21, 2025)
446383f fix (gewenbin0992, Jan 21, 2025)
f87160d formated (gewenbin0992, Jan 21, 2025)
4012f4a simpler init (gewenbin0992, Jan 21, 2025)
d671be8 fix (gewenbin0992, Jan 21, 2025)
ad46b87 fix (gewenbin0992, Jan 21, 2025)
8390d99 fix (gewenbin0992, Jan 21, 2025)
16913ab fix (gewenbin0992, Jan 21, 2025)
72de302 fix (gewenbin0992, Jan 21, 2025)
91a00e3 update docs (gewenbin0992, Jan 21, 2025)
ca2e16c fix (gewenbin0992, Jan 22, 2025)
64db9c7 fix (gewenbin0992, Jan 22, 2025)
bb0c459 Merge branch 'huggingface:main' into add_qwen2.5vl (gewenbin0992, Jan 22, 2025)
df50ed5 Merge branch 'main' into add_qwen2.5vl (gewenbin0992, Jan 22, 2025)
c84216d update Qwen2VLRotaryEmbedding for yarn (gewenbin0992, Jan 23, 2025)
87882db Merge branch 'main' into add_qwen2.5vl (gewenbin0992, Jan 23, 2025)
a0b3a94 fix (gewenbin0992, Jan 23, 2025)
2 changes: 2 additions & 0 deletions docs/source/en/_toctree.yml
@@ -928,6 +928,8 @@
title: Pix2Struct
- local: model_doc/pixtral
title: Pixtral
- local: model_doc/qwen2_5_vl
title: Qwen2.5-VL
- local: model_doc/qwen2_audio
title: Qwen2Audio
- local: model_doc/qwen2_vl
1 change: 1 addition & 0 deletions docs/source/en/index.md
@@ -285,6 +285,7 @@ Flax), PyTorch, and/or TensorFlow.
| [PVTv2](model_doc/pvt_v2) | ✅ | ❌ | ❌ |
| [QDQBert](model_doc/qdqbert) | ✅ | ❌ | ❌ |
| [Qwen2](model_doc/qwen2) | ✅ | ❌ | ❌ |
| [Qwen2_5_VL](model_doc/qwen2_5_vl) | ✅ | ❌ | ❌ |
| [Qwen2Audio](model_doc/qwen2_audio) | ✅ | ❌ | ❌ |
| [Qwen2MoE](model_doc/qwen2_moe) | ✅ | ❌ | ❌ |
| [Qwen2VL](model_doc/qwen2_vl) | ✅ | ❌ | ❌ |
300 changes: 300 additions & 0 deletions docs/source/en/model_doc/qwen2_5_vl.md
@@ -0,0 +1,300 @@
<!--Copyright 2025 The Qwen Team and The HuggingFace Inc. team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->

# Qwen2.5-VL

## Overview

The [Qwen2.5-VL](https://qwenlm.github.io/blog/qwen2_5-vl/) model is an update to [Qwen2-VL](https://arxiv.org/abs/2409.12191) from the Qwen team at Alibaba Group.

The abstract from the release blog post is the following:

*Qwen2.5-VL marks a major step forward from Qwen2-VL, built upon the latest Qwen2.5 LLM. We've accelerated training and testing through the strategic implementation of window attention within the ViT. The ViT architecture itself has been refined with SwiGLU and RMSNorm, aligning it more closely with the LLM's structure. A key innovation is the expansion of native dynamic resolution to encompass the temporal dimension, in addition to spatial aspects. Furthermore, we've upgraded MRoPE, incorporating absolute time alignment on the time axis to allow the model to effectively capture temporal dynamics, regardless of frame rate, leading to superior video understanding.*
> **Collaborator:** love that it gets straight to the differences! 🤗


## Usage example

### Single Media Inference

The model can accept both images and videos as input. Here's example code for inference.

```python
import requests
import torch
from PIL import Image
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from transformers.image_utils import load_images, load_video

# Load the model in half-precision on the available device(s)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")

# Image
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'

inputs = processor(text=[text_prompt], images=[image], padding=True, return_tensors="pt")
inputs = inputs.to("cuda")

# Inference: generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs.input_ids, output_ids)]
output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(output_text)

# Video
video = load_video(video="/path/to/video.mp4")
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "video"},
            {"type": "text", "text": "What happened in the video?"},
        ],
    }
]

# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|video_pad|><|vision_end|>What happened in the video?<|im_end|>\n<|im_start|>assistant\n'

# Qwen2.5-VL modifies the temporal positional encoding (MRoPE) according to the
# video's frame rate (FPS), so the video's FPS must be provided as input.
inputs = processor(text=[text_prompt], videos=[video], fps=[1.0], padding=True, return_tensors="pt")
inputs = inputs.to("cuda")

# Inference: generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs.input_ids, output_ids)]
output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(output_text)
```
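
The `fps` argument matters because of the absolute time alignment described in the overview: a frame's temporal position id is derived from its timestamp rather than from its index in the sampled sequence, so videos sampled at different frame rates still produce comparable positions. The sketch below only illustrates that idea; it is not the library's actual implementation, and `tokens_per_second` is an assumed constant used for illustration.

```python
# Illustrative sketch of fps-aware temporal positions (not the library's
# actual implementation). With absolute time alignment, a frame's temporal
# position depends on when it occurs, not on its index in the sampled clip.
def temporal_position_ids(num_frames: int, fps: float, tokens_per_second: int = 2) -> list[int]:
    # `tokens_per_second` is an assumed illustrative constant mapping seconds
    # to position-id units on the time axis.
    return [int(i / fps * tokens_per_second) for i in range(num_frames)]

# Eight frames sampled at 1 FPS span 8 seconds...
print(temporal_position_ids(8, fps=1.0))  # [0, 2, 4, 6, 8, 10, 12, 14]
# ...while eight frames at 2 FPS span only 4 seconds, and the ids reflect that.
print(temporal_position_ids(8, fps=2.0))  # [0, 1, 2, 3, 4, 5, 6, 7]
```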

### Batch Mixed Media Inference

The model can batch inputs composed of mixed samples of various types such as images, videos, and text. Here is an example.

```python
images = load_images([
    "/path/to/image1.jpg",
    "/path/to/image2.jpg",
    "/path/to/image3.jpg",
    "/path/to/image4.jpg",
    "/path/to/image5.jpg",
])
video = load_video(video="/path/to/video.mp4")

# Conversation for the first image
conversation1 = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Conversation with two images
conversation2 = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "image"},
            {"type": "text", "text": "What is written in the pictures?"},
        ],
    }
]

# Conversation with pure text
conversation3 = [
    {
        "role": "user",
        "content": "who are you?",
    }
]

# Conversation with mixed media
conversation4 = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "image"},
            {"type": "video"},
            {"type": "text", "text": "What are the common elements in these media?"},
        ],
    }
]

conversations = [conversation1, conversation2, conversation3, conversation4]
# Preparation for batch inference
texts = [processor.apply_chat_template(msg, add_generation_prompt=True) for msg in conversations]
inputs = processor(
    text=texts,
    images=images,
    videos=[video],
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Batch inference
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs.input_ids, output_ids)]
output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(output_text)
```
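
Note that the processor matches vision inputs to vision placeholders in order across the batch: the five loaded images fill the single `{"type": "image"}` entry in `conversation1`, the two in `conversation2`, and the two in `conversation4`, while the video fills the `{"type": "video"}` entry in `conversation4`.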

### Usage Tips

#### Image Resolution Trade-off

The model supports a wide range of input resolutions. By default, it uses the native resolution of the input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs:

```python
min_pixels = 224*224
max_pixels = 2048*2048
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
```

If GPU memory is limited, you can reduce the resolution as follows:

```python
min_pixels = 256*28*28
max_pixels = 1024*28*28
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
```
This ensures each image is encoded using between 256 and 1024 visual tokens. The 28 comes from the model's patch size of 14 combined with its 2 × 2 spatial patch merging (14 × 2 = 28), so each visual token corresponds to a 28 × 28 pixel region.
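
For a rough sense of what a given resolution costs, the sketch below approximates the idea (it is not the processor's exact resizing code): sides are snapped to multiples of 28, the image is rescaled to fit the pixel budget, and one token is charged per 28 × 28 region.

```python
import math

# Illustrative approximation of the token budget (not the processor's exact
# smart-resize logic): one visual token per 28 x 28 pixel region, with the
# total pixel count clamped into [min_pixels, max_pixels].
def estimate_visual_tokens(height, width, min_pixels=256 * 28 * 28, max_pixels=1024 * 28 * 28):
    # Snap each side to a multiple of 28 so tokens tile the image exactly.
    h = max(28, round(height / 28) * 28)
    w = max(28, round(width / 28) * 28)
    if h * w > max_pixels:  # too large: shrink while keeping the aspect ratio
        scale = math.sqrt(h * w / max_pixels)
        h = math.floor(h / scale / 28) * 28
        w = math.floor(w / scale / 28) * 28
    elif h * w < min_pixels:  # too small: enlarge while keeping the aspect ratio
        scale = math.sqrt(min_pixels / (h * w))
        h = math.ceil(h * scale / 28) * 28
        w = math.ceil(w * scale / 28) * 28
    return (h // 28) * (w // 28)

print(estimate_visual_tokens(1080, 1920))  # a Full HD frame -> about 1008 tokens
```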

#### Multiple Image Inputs

By default, images and video content are included directly in the conversation without labels. When handling multiple images and videos, it's helpful to add index labels to them so they can be referenced unambiguously. Users can control this behavior with the `add_vision_id` argument:

```python
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Hello, how are you?"},
        ],
    },
    {
        "role": "assistant",
        "content": "I'm doing well, thank you for asking. How can I assist you today?",
    },
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Can you describe these images and video?"},
            {"type": "image"},
            {"type": "image"},
            {"type": "video"},
            {"type": "text", "text": "These are from my vacation."},
        ],
    },
    {
        "role": "assistant",
        "content": "I'd be happy to describe the images and video for you. Could you please provide more context about your vacation?",
    },
    {
        "role": "user",
        "content": "It was a trip to the mountains. Can you see the details in the images and video?",
    },
]

# Default: no vision ids
prompt_without_id = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Hello, how are you?<|im_end|>\n<|im_start|>assistant\nI'm doing well, thank you for asking. How can I assist you today?<|im_end|>\n<|im_start|>user\nCan you describe these images and video?<|vision_start|><|image_pad|><|vision_end|><|vision_start|><|image_pad|><|vision_end|><|vision_start|><|video_pad|><|vision_end|>These are from my vacation.<|im_end|>\n<|im_start|>assistant\nI'd be happy to describe the images and video for you. Could you please provide more context about your vacation?<|im_end|>\n<|im_start|>user\nIt was a trip to the mountains. Can you see the details in the images and video?<|im_end|>\n<|im_start|>assistant\n'

# Add vision ids
prompt_with_id = processor.apply_chat_template(conversation, add_generation_prompt=True, add_vision_id=True)
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nPicture 1: <|vision_start|><|image_pad|><|vision_end|>Hello, how are you?<|im_end|>\n<|im_start|>assistant\nI'm doing well, thank you for asking. How can I assist you today?<|im_end|>\n<|im_start|>user\nCan you describe these images and video?Picture 2: <|vision_start|><|image_pad|><|vision_end|>Picture 3: <|vision_start|><|image_pad|><|vision_end|>Video 1: <|vision_start|><|video_pad|><|vision_end|>These are from my vacation.<|im_end|>\n<|im_start|>assistant\nI'd be happy to describe the images and video for you. Could you please provide more context about your vacation?<|im_end|>\n<|im_start|>user\nIt was a trip to the mountains. Can you see the details in the images and video?<|im_end|>\n<|im_start|>assistant\n'
```

#### Flash-Attention 2 to speed up generation

First, make sure to install the latest version of Flash Attention 2:

```bash
pip install -U flash-attn --no-build-isolation
```

Also, you should have hardware that is compatible with FlashAttention 2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`.

To load and run a model using FlashAttention-2, add `attn_implementation="flash_attention_2"` when loading the model:

```python
import torch
from transformers import Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)
```
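
Since FlashAttention-2 requires both a compatible package and compatible hardware, a defensive loading pattern can fall back to PyTorch's SDPA implementation (which this model also supports, per the support lists below). A minimal sketch using `is_flash_attn_2_available` from `transformers.utils`:

```python
import torch
from transformers import Qwen2_5_VLForConditionalGeneration
from transformers.utils import is_flash_attn_2_available

# Prefer FlashAttention-2 when the flash-attn package is installed and usable;
# otherwise fall back to PyTorch's scaled-dot-product attention.
attn_implementation = "flash_attention_2" if is_flash_attn_2_available() else "sdpa"

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    torch_dtype=torch.bfloat16,
    attn_implementation=attn_implementation,
)
```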



## Qwen2_5_VLConfig

[[autodoc]] Qwen2_5_VLConfig

## Qwen2_5_VLImageProcessor

[[autodoc]] Qwen2_5_VLImageProcessor
- preprocess

## Qwen2_5_VLProcessor

[[autodoc]] Qwen2_5_VLProcessor

## Qwen2_5_VLModel

[[autodoc]] Qwen2_5_VLModel
- forward

## Qwen2_5_VLForConditionalGeneration

[[autodoc]] Qwen2_5_VLForConditionalGeneration
- forward
2 changes: 2 additions & 0 deletions docs/source/en/perf_infer_gpu_one.md
@@ -97,6 +97,7 @@ FlashAttention-2 is currently supported for the following architectures:
* [Qwen2Audio](https://huggingface.co/docs/transformers/model_doc/qwen2_audio#transformers.Qwen2AudioEncoder)
* [Qwen2MoE](https://huggingface.co/docs/transformers/model_doc/qwen2_moe#transformers.Qwen2MoeModel)
* [Qwen2VL](https://huggingface.co/docs/transformers/model_doc/qwen2_vl#transformers.Qwen2VLModel)
* [Qwen2.5VL](https://huggingface.co/docs/transformers/model_doc/qwen2_5_vl#transformers.Qwen2_5_VLModel)
* [RAG](https://huggingface.co/docs/transformers/model_doc/rag#transformers.RagModel)
* [SpeechEncoderDecoder](https://huggingface.co/docs/transformers/model_doc/speech_encoder_decoder#transformers.SpeechEncoderDecoderModel)
* [VisionEncoderDecoder](https://huggingface.co/docs/transformers/model_doc/vision_encoder_decoder#transformers.VisionEncoderDecoderModel)
@@ -297,6 +298,7 @@ For now, Transformers supports SDPA inference and training for the following architectures:
* [Qwen2](https://huggingface.co/docs/transformers/model_doc/qwen2#transformers.Qwen2Model)
* [Qwen2Audio](https://huggingface.co/docs/transformers/model_doc/qwen2_audio#transformers.Qwen2AudioEncoder)
* [Qwen2MoE](https://huggingface.co/docs/transformers/model_doc/qwen2_moe#transformers.Qwen2MoeModel)
* [Qwen2.5VL](https://huggingface.co/docs/transformers/model_doc/qwen2_5_vl#transformers.Qwen2_5_VLModel)
* [RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta#transformers.RobertaModel)
* [Sew](https://huggingface.co/docs/transformers/main/en/model_doc/sew#transformers.SEWModel)
* [SigLIP](https://huggingface.co/docs/transformers/model_doc/siglip)
22 changes: 22 additions & 0 deletions src/transformers/__init__.py
@@ -708,6 +708,10 @@
"Qwen2Config",
"Qwen2Tokenizer",
],
"models.qwen2_5_vl": [
"Qwen2_5_VLConfig",
"Qwen2_5_VLProcessor",
],
"models.qwen2_audio": [
"Qwen2AudioConfig",
"Qwen2AudioEncoderConfig",
@@ -1263,6 +1267,7 @@
_import_structure["models.pixtral"].append("PixtralImageProcessor")
_import_structure["models.poolformer"].extend(["PoolFormerFeatureExtractor", "PoolFormerImageProcessor"])
_import_structure["models.pvt"].extend(["PvtImageProcessor"])
_import_structure["models.qwen2_5_vl"].extend(["Qwen2_5_VLImageProcessor"])
_import_structure["models.qwen2_vl"].extend(["Qwen2VLImageProcessor"])
_import_structure["models.rt_detr"].extend(["RTDetrImageProcessor"])
_import_structure["models.sam"].extend(["SamImageProcessor"])
@@ -3276,6 +3281,13 @@
"Qwen2PreTrainedModel",
]
)
_import_structure["models.qwen2_5_vl"].extend(
[
"Qwen2_5_VLForConditionalGeneration",
"Qwen2_5_VLModel",
"Qwen2_5_VLPreTrainedModel",
]
)
_import_structure["models.qwen2_audio"].extend(
[
"Qwen2AudioEncoder",
@@ -5783,6 +5795,10 @@
from .models.pvt import PvtConfig
from .models.pvt_v2 import PvtV2Config
from .models.qwen2 import Qwen2Config, Qwen2Tokenizer
from .models.qwen2_5_vl import (
Qwen2_5_VLConfig,
Qwen2_5_VLProcessor,
)
from .models.qwen2_audio import (
Qwen2AudioConfig,
Qwen2AudioEncoderConfig,
@@ -6362,6 +6378,7 @@
PoolFormerImageProcessor,
)
from .models.pvt import PvtImageProcessor
from .models.qwen2_5_vl import Qwen2_5_VLImageProcessor
from .models.qwen2_vl import Qwen2VLImageProcessor
from .models.rt_detr import RTDetrImageProcessor
from .models.sam import SamImageProcessor
@@ -7980,6 +7997,11 @@
Qwen2Model,
Qwen2PreTrainedModel,
)
from .models.qwen2_5_vl import (
Qwen2_5_VLForConditionalGeneration,
Qwen2_5_VLModel,
Qwen2_5_VLPreTrainedModel,
)
from .models.qwen2_audio import (
Qwen2AudioEncoder,
Qwen2AudioForConditionalGeneration,