This repo contains a [70% sparse Llama 2 7B](https://huggingface.co/neuralmagic/Llama-2-7b-pruned70-retrained-evolcodealpaca) finetuned for code generation tasks using the [Evolved CodeAlpaca](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1) dataset.
It was then quantized to 8-bit weights + activations and exported to deploy with [DeepSparse](https://github.com/neuralmagic/deepsparse), a CPU inference runtime for sparse models.

Official model weights from [Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment](https://arxiv.org/abs/2405.03594).

**Authors**: Neural Magic, Cerebras

## Usage
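Below is a minimal sketch of running this model on CPU with DeepSparse's `TextGeneration` pipeline. The `MODEL_STUB` value and the Alpaca-style prompt template are assumptions for illustration, not taken from this card; substitute this repository's actual DeepSparse model stub and the prompt format the model was trained with.

```python
# Requires a DeepSparse build with LLM support, e.g. `pip install "deepsparse[llm]"`.
from deepsparse import TextGeneration

# Placeholder stub (assumption): replace with this repository's DeepSparse/SparseZoo stub.
MODEL_STUB = "hf:neuralmagic/Llama-2-7b-pruned70-retrained-evolcodealpaca"

# Assumed Alpaca-style instruction template; adjust if the model expects a different format.
instruction = "Write a function in Python that checks whether a string is a palindrome."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:\n"
)

# Build the CPU text-generation pipeline and generate a completion.
pipeline = TextGeneration(model=MODEL_STUB)
output = pipeline(prompt=prompt, max_new_tokens=256)
print(output.generations[0].text)
```

DeepSparse executes the sparse-quantized export directly on CPU, so no GPU is required for this snippet.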