Commit fd1d860 (verified) by helenai · 1 parent: d5cbbf4

Update README.md

Files changed (1): README.md +40 -0

README.md CHANGED
@@ -5,11 +5,15 @@ base_model:
 
  This is the [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) model, converted to OpenVINO, with int4 weights for the language model, int8 weights for the other models.
 
+ ## Download Model
+
  To download the model, run `pip install huggingface-hub[cli]` and then:
  ```
  huggingface-cli download helenai/Qwen2.5-VL-7B-Instruct-ov-int4 --local-dir Qwen2.5-VL-7B-Instruct-ov-int4
  ```
 
+ ## Run inference with OpenVINO GenAI
+
  Use OpenVINO GenAI to run inference on this model. This model works with OpenVINO GenAI 2025.2 and later.
 
  - Install OpenVINO GenAI and pillow:
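Only the context around the README's inference example is visible in these hunks (it ends with `print(result.texts[0])`). A minimal sketch of visual-language generation with OpenVINO GenAI, assuming the standard `openvino_genai.VLMPipeline` API; the image file, prompt, and device are placeholders:

```python
# Sketch of VLM inference with OpenVINO GenAI (image path, prompt and device are placeholders).
import numpy as np
import openvino as ov
import openvino_genai
from PIL import Image

# Point the pipeline at the directory downloaded above; "GPU" can be used instead of "CPU".
pipe = openvino_genai.VLMPipeline("Qwen2.5-VL-7B-Instruct-ov-int4", "CPU")

# Load the image with pillow and wrap it as an OpenVINO tensor of uint8 pixels.
image = Image.open("example.jpg").convert("RGB")
image_tensor = ov.Tensor(np.array(image, dtype=np.uint8))

# Cap the response length via a generation config.
config = openvino_genai.GenerationConfig()
config.max_new_tokens = 100

# Generate a caption for the image.
result = pipe.generate("Describe this image.", image=image_tensor, generation_config=config)
print(result.texts[0])
```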
 
@@ -43,3 +47,39 @@ print(result.texts[0])
  ```
 
  See [OpenVINO GenAI repository](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#performing-visual-language-text-generation)
+
+ ## Model export properties
+
+ Model export command:
+
+ ```
+ optimum-cli export openvino -m Qwen/Qwen2.5-VL-7B-Instruct --weight-format int4 Qwen2.5-VL-7B-Instruct-ov-int4
+ ```
+
+ ### Framework versions
+
+ ```
+ openvino : 2025.2.0-19140-c01cd93e24d-releases/2025/2
+ nncf : 2.17.0.dev0+c6296072
+ optimum_intel : 1.26.0.dev0+0e2ccef
+ optimum : 1.27.0
+ pytorch : 2.7.0+cpu
+ transformers : 4.51.3
+ ```
+
+ ### LLM export properties
+
+ ```
+ all_layers : False
+ awq : False
+ backup_mode : int8_asym
+ compression_format : dequantize
+ gptq : False
+ group_size : 128
+ ignored_scope : []
+ lora_correction : False
+ mode : int4_asym
+ ratio : 1.0
+ scale_estimation : False
+ sensitivity_metric : weight_quantization_error
+ ```
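The LLM export properties above amount to 4-bit asymmetric weight compression (`int4_asym`) with group size 128 applied to all language-model weights (`ratio : 1.0`), with `int8_asym` as the backup precision. A rough Python-API equivalent of the `optimum-cli` command, assuming the optimum-intel classes `OVModelForVisualCausalLM` and `OVWeightQuantizationConfig` cover this model type:

```python
# Rough Python equivalent of the optimum-cli export command above (a sketch;
# class and argument names assume a recent optimum-intel and should be verified).
from optimum.intel import OVModelForVisualCausalLM, OVWeightQuantizationConfig

# Mirror the recorded LLM export properties: int4_asym, group_size 128, ratio 1.0.
quantization_config = OVWeightQuantizationConfig(
    bits=4,
    sym=False,      # asymmetric int4, as in "mode : int4_asym"
    group_size=128,
    ratio=1.0,
)

# export=True converts the PyTorch checkpoint to OpenVINO IR while loading.
model = OVModelForVisualCausalLM.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    export=True,
    quantization_config=quantization_config,
)
model.save_pretrained("Qwen2.5-VL-7B-Instruct-ov-int4")
```

Since `awq`, `gptq`, `scale_estimation`, and `lora_correction` are all False above, this is the data-free compression path and no calibration dataset is passed.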