Updated Readme
README.md
CHANGED
@@ -18,11 +18,11 @@ This repo contains 4-bit quantized (using bitsandbytes) model Mistral AI_'s Mist
 
 ### About 4 bit quantization using bitsandbytes
 
-- QLoRA: Efficient Finetuning of Quantized LLMs: [arXiv - QLoRA: Efficient Finetuning of Quantized LLMs]
+- QLoRA: Efficient Finetuning of Quantized LLMs: [arXiv - QLoRA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/abs/2305.14314)
 
-- Hugging Face Blog post on 4-bit quantization using bitsandbytes: [Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA]
+- Hugging Face Blog post on 4-bit quantization using bitsandbytes: [Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes)
 
-- bitsandbytes github repo: [bitsandbytes github repo]
+- bitsandbytes github repo: [bitsandbytes github repo](https://github.com/TimDettmers/bitsandbytes)
 
 
 ### Model Description
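The links added in this hunk all describe the same workflow: loading a checkpoint in 4-bit precision through bitsandbytes. As a minimal sketch of that workflow (not this repo's own instructions), the snippet below loads a Mistral-family model with transformers' `BitsAndBytesConfig`, using the NF4 and double-quantization settings recommended in the QLoRA paper and the linked blog post; the `model_id` is a placeholder, not necessarily this repository's id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization, as described in the QLoRA
# paper and the Hugging Face bitsandbytes blog post linked above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Placeholder id: replace with the actual model repository id.
model_id = "mistralai/Mistral-7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # requires the accelerate package
)

# Quick generation check with the quantized model.
inputs = tokenizer("4-bit quantization lets you", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```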