Commit 5cff35b
Parent(s): dce14eb
add GPTQ-trained

README.md CHANGED

@@ -21,12 +21,38 @@ tags:

- The model responds with a structured JSON argument containing the function name and arguments

Available models:
- fLlama-7B ([bitsandbytes NF4](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling)), ([GGML](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling-GGML)), ([GPTQ](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling-GPTQ)) - free
- fLlama-13B ([bitsandbytes NF4](https://huggingface.co/Trelis/Llama-2-13b-chat-hf-function-calling)), ([GPTQ](https://huggingface.co/Trelis/Llama-2-13b-chat-hf-function-calling-GPTQ)) - paid

## Inference with Google Colab and HuggingFace 🤗

**GPTQ (fast + good accuracy)**
Get started by saving your own copy of this [function calling chatbot](https://colab.research.google.com/drive/1u8x41Jx8WWtI-nzHOgqTxkS3Q_lcjaSX?usp=sharing).
You will be able to run inference using a free Colab notebook if you select a GPU runtime. See the notebook for more details.

@@ -138,29 +164,28 @@ It is recommended to handle cases where:

- There is no JSON object in the response
- The response contains text in addition to the JSON response

## Quantization Configurations

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
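
These flags map directly onto `transformers.BitsAndBytesConfig`. A minimal sketch of recreating the same 4-bit NF4 setup at load time; the model id is taken from the list of available models above, but loading it with this config is an assumption rather than something this README documents:

```
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the flags listed above: 4-bit NF4, double quantization, bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Trelis/Llama-2-7b-chat-hf-function-calling",  # NF4 model from the list above (assumption)
    quantization_config=bnb_config,
    device_map="auto",
)
```
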
The following `bitsandbytes` quantization config was used during training:

@@ -21,12 +21,38 @@ tags:

- The model responds with a structured JSON argument containing the function name and arguments

Available models:
- fLlama-7B ([bitsandbytes NF4](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling)), ([GGML](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling-GGML)), ([GPTQ](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling-GPTQ)), ([GPTQ-trained](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling-GPTQ-trained)) - free
- fLlama-13B ([bitsandbytes NF4](https://huggingface.co/Trelis/Llama-2-13b-chat-hf-function-calling)), ([GPTQ](https://huggingface.co/Trelis/Llama-2-13b-chat-hf-function-calling-GPTQ)) - paid

## Inference with Google Colab and HuggingFace 🤗

**GPTQ-trained (fast + best accuracy) - this repo**

All other models come from bitsandbytes NF4 training; this model is specifically trained using GPTQ methods.

It is currently trickier to run because it is an adapter model. Try:

```
# Install the GPTQ-integration branches (Colab / notebook shell commands)
!pip install -q git+https://github.com/SunMarc/transformers.git@gptq_integration
!pip install -q git+https://github.com/SunMarc/optimum.git@add-gptq-marc
!pip install -q git+https://github.com/SunMarc/peft.git@peft_gptq
!pip install -q git+https://github.com/fxmarty/AutoGPTQ.git@patch-act-order-exllama  # could probably be sped up with prebuilt wheels; takes about 5 minutes at present

import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, pipeline
from auto_gptq.nn_modules.qlinear.qlinear_cuda_old import QuantLinear

# Script for model loading if using adapters
model_name_or_path = "ybelkada/llama-7b-GPTQ-test"

# device_map must be "auto"; the quantized model cannot be loaded on cpu
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto")

adapter_model_name = 'Trelis/Llama-2-7b-chat-hf-function-calling-GPTQ-trained-adapters'
```
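
The snippet above stops at defining `adapter_model_name` without actually attaching the adapters. A minimal sketch of one way to finish loading and run a generation, assuming the `peft_gptq` branch exposes the usual PEFT API (`PeftModel.from_pretrained`); the tokenizer source and the prompt are illustrative assumptions, not taken from this repo:

```
from peft import PeftModel
from transformers import AutoTokenizer

# Attach the GPTQ-trained function-calling adapters to the quantized base model
model = PeftModel.from_pretrained(model, adapter_model_name)

# Assumption: the tokenizer can be loaded from the adapter repo
tokenizer = AutoTokenizer.from_pretrained(adapter_model_name)

prompt = "List the functions you can call."  # illustrative prompt, not the documented prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
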
**GPTQ (fast + good accuracy)**
Get started by saving your own copy of this [function calling chatbot](https://colab.research.google.com/drive/1u8x41Jx8WWtI-nzHOgqTxkS3Q_lcjaSX?usp=sharing).
You will be able to run inference using a free Colab notebook if you select a GPU runtime. See the notebook for more details.
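
The notebook is the recommended path. Purely as an illustration of what inference with the GPTQ variant can look like once the GPTQ integration above is installed; the model id is taken from the list of available models, and the generation settings are assumptions:

```
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "Trelis/Llama-2-7b-chat-hf-function-calling-GPTQ"  # GPTQ variant from the list above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs a GPU runtime

# Simple text-generation pipeline around the quantized model
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("What functions can you call?", max_new_tokens=200)[0]["generated_text"])
```
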
@@ -138,29 +164,28 @@ It is recommended to handle cases where:

- There is no JSON object in the response
- The response contains text in addition to the JSON response
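
Following the recommendations above, a minimal sketch of defensive parsing; the regex-based extraction, the helper name, and the example payload are illustrative assumptions, not code from this repo:

```
import json
import re

def extract_function_call(response: str):
    """Return the first JSON object found in the model response, or None."""
    match = re.search(r"\{.*\}", response, re.DOTALL)
    if match is None:
        return None  # no JSON object in the response
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None  # extra text or malformed JSON

# Handles responses that wrap the JSON object in extra text
print(extract_function_call('Sure! {"function": "search_bing", "arguments": {"query": "London weather"}}'))
```
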
## Training procedure

The following GPTQ quantization config was used during training:
- quant_method: gptq
- bits: 4
- tokenizer: None
- dataset: None
- group_size: 128
- damp_percent: 0.01
- desc_act: False
- sym: True
- true_sequential: True
- use_cuda_fp16: False
- model_seqlen: None
- block_name_to_quantize: None
- module_name_preceding_first_block: None
- batch_size: 1
- pad_token_id: None
- disable_exllama: True
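
For reference, these values correspond to the fields of `transformers.GPTQConfig` in the released GPTQ integration (transformers >= 4.32) rather than the pre-release branch used above. A minimal sketch of an equivalent quantization at load time; the base model id and the calibration dataset are assumptions (the dump above lists `dataset: None`):

```
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_model = "meta-llama/Llama-2-7b-chat-hf"  # assumption: the fLlama-7B base model
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Mirrors the config above: 4-bit, group size 128, damp 0.01, symmetric, no act-order
gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    damp_percent=0.01,
    desc_act=False,
    sym=True,
    true_sequential=True,
    dataset="c4",        # calibration data is an assumption; the training dump lists None
    tokenizer=tokenizer,
)

# Quantizes the base model at load time; adapters can then be trained on top with PEFT
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=gptq_config,
    device_map="auto",
)
```
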
### Framework versions
- PEFT 0.5.0.dev0