update readme
README.md (CHANGED)
@@ -105,9 +105,9 @@ For more information, please refer to our [Github repo](https://github.com/QwenL
 
 ### 效果评测
 
-我们对BF16和Int4
+我们对BF16和Int4模型在基准评测上做了测试(使用zero-shot设置),发现量化模型效果损失较小,结果如下所示:
 
-We illustrate the
+We illustrate the zero-shot performance of both BF16 and Int4 models on the benchmark, and we find that the quantized model does not suffer from significant performance degradation. Results are shown below:
 
 | Quantization | MMLU | CEval (val) | GSM8K | Humaneval |
 | ------------- | :--------: | :----------: | :----: | :--------: |
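The added lines describe a zero-shot comparison between the BF16 checkpoint and its GPTQ Int4 counterpart. Below is a minimal sketch of how such a comparison can be set up with `transformers`; it is not part of this commit, and the repo IDs `Qwen/Qwen-7B-Chat` and `Qwen/Qwen-7B-Chat-Int4`, the toy prompt, and the generation settings are assumptions for illustration only.

```python
# Minimal sketch, not part of the commit: load the BF16 baseline and the GPTQ
# Int4 variant the same way and query both zero-shot (no in-context examples).
# Repo IDs below are assumptions; substitute the checkpoint this README describes.
# Loading the Int4 checkpoint additionally needs `auto-gptq` and `optimum` installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)

model_bf16 = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",        # full-precision (BF16) baseline
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
).eval()

model_int4 = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat-Int4",   # GPTQ Int4 quantized variant
    device_map="auto",
    trust_remote_code=True,
).eval()


def zero_shot_answer(model, prompt):
    # Zero-shot setting: the bare question only, no few-shot exemplars.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=32)
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)


prompt = "Question: What is 12 * 7?\nAnswer:"
print("BF16:", zero_shot_answer(model_bf16, prompt))
print("Int4:", zero_shot_answer(model_int4, prompt))
```

Reproducing the table itself would mean running the full MMLU, C-Eval, GSM8K, and HumanEval suites under the same zero-shot prompting for each checkpoint, rather than a single query as above.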