---
tags:
- vllm
- vision
- fp8
license: apache-2.0
license_link: >-
  https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: Qwen/Qwen2.5-VL-32B-Instruct
library_name: transformers
---
# Qwen2.5-VL-32B-Instruct-FP8-Dynamic
## Model Overview
- **Model Architecture:** Qwen2.5-VL-32B-Instruct
- **Input:** Vision-Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP8
- **Activation quantization:** FP8
- **Release Date:** 5/3/2025
- **Version:** 1.0
- **Model Developers:** BC Card
Quantized version of [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct).
### Model Optimizations
This model was obtained by quantizing the weights of [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) to the FP8 data type, with dynamic per-token FP8 activation quantization, ready for inference with a recent vLLM release.
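The exact recipe used to produce this checkpoint is not published in this card. The sketch below shows how an FP8-Dynamic quantization of this kind is typically produced with [llm-compressor](https://github.com/vllm-project/llm-compressor); the `ignore` patterns (keeping `lm_head` and the vision tower in higher precision) are assumptions, not the confirmed recipe.
```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "Qwen/Qwen2.5-VL-32B-Instruct"

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(MODEL_ID)

# FP8-Dynamic: static per-channel FP8 weights, dynamic per-token FP8 activations.
# Assumption: lm_head and the vision tower are left in higher precision.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["lm_head", "re:visual.*"],
)

# FP8_DYNAMIC needs no calibration data, so oneshot runs without a dataset.
oneshot(model=model, recipe=recipe)

SAVE_DIR = MODEL_ID.split("/")[-1] + "-FP8-Dynamic"
model.save_pretrained(SAVE_DIR, save_compressed=True)
processor.save_pretrained(SAVE_DIR)
```
Because activations are quantized dynamically at runtime, no calibration dataset is required, which keeps the quantization step fast and data-free.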
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams

# Prepare the model; vLLM loads the compressed FP8 weights directly.
llm = LLM(
    model="BCCard/Qwen2.5-VL-32B-Instruct-FP8-Dynamic",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)

# Prepare inputs using the Qwen2.5-VL chat template.
question = "What is the content of this image?"
prompt = (
    "<|im_start|>user\n"
    "<|vision_start|><|image_pad|><|vision_end|>"
    f"{question}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = {
    "prompt": prompt,
    "multi_modal_data": {
        "image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
    },
}

# Generate a response.
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT  : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
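As a minimal sketch (the port, flags, and image URL below are illustrative), the server can be started with `vllm serve BCCard/Qwen2.5-VL-32B-Instruct-FP8-Dynamic --trust-remote-code --max-model-len 4096` and then queried with the OpenAI Python client:
```python
from openai import OpenAI

# vLLM's OpenAI-compatible server does not require a real API key by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="BCCard/Qwen2.5-VL-32B-Instruct-FP8-Dynamic",
    messages=[
        {
            "role": "user",
            "content": [
                # Placeholder image URL; replace with a reachable image.
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/cherry_blossom.jpg"}},
                {"type": "text", "text": "What is the content of this image?"},
            ],
        }
    ],
    max_tokens=64,
    temperature=0.2,
)
print(response.choices[0].message.content)
```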