nm-research committed · Commit 6043ef6 (verified) · Parent: 492f5db

Update README.md

Files changed (1): README.md (+62 −2)
README.md CHANGED
@@ -24,7 +24,7 @@ library_name: transformers
 - **Model Developers:** Neural Magic
 
 Quantized version of [ibm-granite/granite-3.1-2b-instruct](https://huggingface.co/ibm-granite/granite-3.1-2b-instruct).
- It achieves an average score of xxxx on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves xxxx.
+ It achieves an average score of 61.84 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 61.98.
 
 ### Model Optimizations
 
@@ -178,4 +178,64 @@ evalplus.evaluate \
 | HumanEval Pass@1 | 53.40 | 54.90 |
 
 
-
+ ## Inference Performance
+ 
+ 
+ This model achieves up to 1.2x speedup in single-stream deployment on L40 GPUs.
+ The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.6.6.post1 and [GuideLLM](https://github.com/neuralmagic/guidellm).
+ 
+ ### Single-stream performance (measured with vLLM version 0.6.6.post1)
+ <table>
+   <tr>
+     <td></td>
+     <td></td>
+     <td></td>
+     <th style="text-align: center;" colspan="7">Latency (s)</th>
+   </tr>
+   <tr>
+     <th>GPU class</th>
+     <th>Model</th>
+     <th>Speedup</th>
+     <th>Code Completion<br>prefill: 256 tokens<br>decode: 1024 tokens</th>
+     <th>Docstring Generation<br>prefill: 768 tokens<br>decode: 128 tokens</th>
+     <th>Code Fixing<br>prefill: 1024 tokens<br>decode: 1024 tokens</th>
+     <th>RAG<br>prefill: 1024 tokens<br>decode: 128 tokens</th>
+     <th>Instruction Following<br>prefill: 256 tokens<br>decode: 128 tokens</th>
+     <th>Multi-turn Chat<br>prefill: 512 tokens<br>decode: 256 tokens</th>
+     <th>Large Summarization<br>prefill: 4096 tokens<br>decode: 512 tokens</th>
+   </tr>
+   <tr>
+     <td style="vertical-align: middle;" rowspan="3">L40</td>
+     <td>granite-3.1-2b-instruct</td>
+     <td></td>
+     <td>9.3</td>
+     <td>1.2</td>
+     <td>9.4</td>
+     <td>1.2</td>
+     <td>1.2</td>
+     <td>2.3</td>
+     <td>5.0</td>
+   </tr>
+   <tr>
+     <td>granite-3.1-2b-instruct-FP8-dynamic<br>(this model)</td>
+     <td>1.26</td>
+     <td>7.3</td>
+     <td>0.9</td>
+     <td>7.4</td>
+     <td>1.0</td>
+     <td>0.9</td>
+     <td>1.8</td>
+     <td>4.1</td>
+   </tr>
+   <tr>
+     <td>granite-3.1-2b-instruct-quantized.w4a16</td>
+     <td>1.88</td>
+     <td>4.8</td>
+     <td>0.6</td>
+     <td>4.9</td>
+     <td>0.6</td>
+     <td>0.6</td>
+     <td>1.2</td>
+     <td>2.8</td>
+   </tr>
+ </table>
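
For readers who want to sanity-check the single-stream latencies above, the sketch below uses vLLM's offline API. This is not the authors' benchmark harness (the card credits GuideLLM for that); the Hub model id and the filler-prompt construction are assumptions, and the token counts mirror the Instruction Following scenario (prefill: 256 tokens, decode: 128 tokens).

```python
# Minimal single-stream latency sketch, assuming the model is published
# under this Hub id (illustrative, not confirmed by the commit itself).
import time

from vllm import LLM, SamplingParams

MODEL_ID = "neuralmagic/granite-3.1-2b-instruct-FP8-dynamic"  # assumed id

llm = LLM(model=MODEL_ID)

# Build a prompt of roughly 256 tokens by repeating a short filler word;
# exact token count depends on the tokenizer, so this is approximate.
prompt = "hello " * 256

# Pin the decode length to exactly 128 tokens so the run matches the
# scenario shape instead of stopping early at an EOS token.
params = SamplingParams(max_tokens=128, min_tokens=128, ignore_eos=True)

# Warm-up request so one-time startup costs do not distort the timing.
llm.generate([prompt], params)

start = time.perf_counter()
llm.generate([prompt], params)  # one request at a time = single-stream
latency = time.perf_counter() - start
print(f"end-to-end latency: {latency:.2f} s")
```

A single warm-up plus one timed request keeps the measurement in the single-stream regime the table describes; averaging several timed runs would give a steadier estimate.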