Update README.md
README.md
# Qwen2-1.5B-Instruct-FP8
## Model Overview

* <h3 style="display: inline;">Model Architecture:</h3> Identical to the Qwen2-1.5B-Instruct architecture
* <h3 style="display: inline;">Model Optimizations:</h3> Weights and activations quantized to FP8
* <h3 style="display: inline;">Release Date:</h3> June 14, 2024
* <h3 style="display: inline;">Model Developers:</h3> Neural Magic

Qwen2-1.5B-Instruct quantized to FP8 weights and activations using per-tensor quantization through the AutoFP8 repository, ready for inference with vLLM >= 0.5.0.
Calibrated with 512 UltraChat samples to achieve 99% performance recovery on the Open LLM Leaderboard evaluations.
Reduces disk space by approximately 40%.
Part of the FP8 LLMs for vLLM collection.
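Since the checkpoint targets vLLM >= 0.5.0, a minimal offline-inference sketch is shown below; the prompt, sampling settings, and `max_model_len` value are illustrative rather than taken from this card.

```
# Minimal vLLM offline-inference sketch for this checkpoint (illustrative settings).
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "neuralmagic/Qwen2-1.5B-Instruct-FP8"

# Format a single chat turn with the Qwen2 chat template.
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize FP8 quantization in one sentence."}],
    tokenize=False,
    add_generation_prompt=True,
)

# vLLM >= 0.5.0 loads the FP8 weights and activation scales directly.
llm = LLM(model=model_id, max_model_len=4096)
outputs = llm.generate([prompt], SamplingParams(temperature=0.7, max_tokens=128))
print(outputs[0].outputs[0].text)
```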
## Usage and Creation
Produced using [AutoFP8 with calibration samples from ultrachat](https://github.com/neuralmagic/AutoFP8/blob/147fa4d9e1a90ef8a93f96fc7d9c33056ddc017a/example_dataset.py).
```
# ... model loading and calibration setup elided ...
model.quantize(examples)
model.save_quantized(quantized_model_dir)
```
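Most of the recipe is elided above, so the following is a hedged end-to-end sketch of the AutoFP8 flow it describes. The `BaseQuantizeConfig` and `AutoFP8ForCausalLM` entry points come from the AutoFP8 repository linked above; the dataset id, prompt formatting, and device handling are assumptions for illustration, not the exact settings used to produce this checkpoint.

```
# Hedged sketch of the AutoFP8 recipe: static per-tensor FP8 scales for weights and
# activations (for E4M3 the per-tensor scale is roughly max|x| / 448).
# Dataset id and preprocessing below are assumptions, not the exact recipe settings.
from datasets import load_dataset
from transformers import AutoTokenizer

from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig

pretrained_model_dir = "Qwen/Qwen2-1.5B-Instruct"
quantized_model_dir = "Qwen2-1.5B-Instruct-FP8"

# Build 512 calibration prompts from UltraChat (dataset id assumed).
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir)
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft").select(range(512))
texts = [tokenizer.apply_chat_template(row["messages"], tokenize=False) for row in ds]
examples = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to("cuda")

# Static per-tensor FP8 quantization of weights and activations.
quantize_config = BaseQuantizeConfig(quant_method="fp8", activation_scheme="static")
model = AutoFP8ForCausalLM.from_pretrained(pretrained_model_dir, quantize_config)
model.quantize(examples)
model.save_quantized(quantized_model_dir)
```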
Evaluated through vLLM with the following script:
```
#!/bin/bash

# Example usage:
# CUDA_VISIBLE_DEVICES=0 ./eval_openllm.sh "neuralmagic/Qwen2-1.5B-Instruct-FP8" "tensor_parallel_size=1,max_model_len=4096,add_bos_token=True,gpu_memory_utilization=0.7"

export MODEL_DIR=${1}
export MODEL_ARGS=${2}

declare -A tasks_fewshot=(
    ["arc_challenge"]=25
    ["winogrande"]=5
    ["truthfulqa_mc2"]=0
    ["hellaswag"]=10
    ["mmlu"]=5
    ["gsm8k"]=5
)

declare -A batch_sizes=(
    ["arc_challenge"]="auto"
    ["winogrande"]="auto"
    ["truthfulqa_mc2"]="auto"
    ["hellaswag"]="auto"
    ["mmlu"]=1
    ["gsm8k"]="auto"
)

for TASK in "${!tasks_fewshot[@]}"; do
    NUM_FEWSHOT=${tasks_fewshot[$TASK]}
    BATCH_SIZE=${batch_sizes[$TASK]}
    lm_eval --model vllm \
        --model_args pretrained=$MODEL_DIR,$MODEL_ARGS \
        --tasks ${TASK} \
        --num_fewshot ${NUM_FEWSHOT} \
        --write_out \
        --show_config \
        --device cuda \
        --batch_size ${BATCH_SIZE} \
        --output_path="results/${TASK}"
done
```
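The script assumes the `lm_eval` CLI from lm-evaluation-harness is installed with its vLLM backend available (for recent releases, an install along the lines of `pip install "lm_eval[vllm]"`).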
## Evaluation
Evaluated on the Open LLM Leaderboard benchmarks through vLLM.

### Open LLM Leaderboard evaluation scores
|  | Qwen2-1.5B-Instruct | Qwen2-1.5B-Instruct-FP8<br>(this model) |
| :------------------: | :----------------------: | :------------------------------------------------: |