Update README.md

README.md (changed):

This repo contains the 4-bit quantized (using bitsandbytes) model of Mistral AI_'s Mistral-7B-Instruct-v0.2.

## Model Details

- Model creator: [Mistral AI_](https://huggingface.co/mistralai)
- Original model: [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)

### About 4 bit quantization using bitsandbytes

- QLoRA: Efficient Finetuning of Quantized LLMs: [arXiv - QLoRA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/abs/2305.14314)
- Hugging Face blog post on 4-bit quantization using bitsandbytes: [Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes)
- bitsandbytes GitHub repo: [bitsandbytes github repo](https://github.com/TimDettmers/bitsandbytes)
### Model Description