|
--- |
|
license: mit |
|
language: |
|
- en |
|
base_model: |
|
- unsloth/phi-4 |
|
- microsoft/phi-4 |
|
pipeline_tag: text-generation |
|
--- |
|
|
|
# Phi-4 converted for ExLlamaV2 |
|
|
|
[ExLlamaV2](https://github.com/turboderp-org/exllamav2) is an inference library for running local LLMs on modern consumer GPUs.
|
|
|
|
|
| Quant | Quant type | File Size | VRAM* |
| ----- | ---------- | --------- | ----- |
| [phi-4 hb8 3bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_3bpw) | 3 bits per weight | 6.66 GB | **10.3 GB** |
| [phi-4 hb8 4bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_4bpw) | 4 bits per weight | 8.36 GB | **11.9 GB** |
| [phi-4 hb8 5bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_5bpw) | 5 bits per weight | 10.1 GB | **13.5 GB** |
| [phi-4 hb8 6bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_6bpw) | 6 bits per weight | 11.8 GB | **15.1 GB** |
| [phi-4 hb8 7bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_7bpw) | 7 bits per weight | 13.5 GB | **16.7 GB** |
| [phi-4 hb8 8bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_8bpw) | 8 bits per weight | 15.2 GB | **18.2 GB** |
|
|
|
<sub>*Approximate values at **16k context, FP16 cache**.</sub>
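
Each quant lives on its own branch of this repo, so point your download at the matching revision. A minimal sketch using `huggingface_hub` (the local directory name is an arbitrary choice):

```python
from huggingface_hub import snapshot_download

# Fetch one quant; each bpw variant is stored on its own branch (revision).
snapshot_download(
    repo_id="cmh/phi-4_exl2",
    revision="hb8_4bpw",          # pick a branch from the table above
    local_dir="phi-4-exl2-4bpw",  # arbitrary local path
)
```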
|
|
|
--------------------------------------------- |
|
|
|
# Phi-4 Model Card |
|
|
|
[Phi-4 Technical Report](https://arxiv.org/pdf/2412.08905) |
|
|
|
## Model Summary |
|
|
|
| | |
|-------------------------|-------------------------------------------------------------------------------|
| **Developers** | Microsoft Research |
| **Description** | `phi-4` is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public-domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small, capable models were trained with data focused on high quality and advanced reasoning.<br><br>`phi-4` underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. |
| **Architecture** | 14B parameters, dense decoder-only Transformer model |
| **Context length** | 16,384 tokens |
|
|
|
## Usage |
|
|
|
### Input Formats |
|
|
|
Given the nature of the training data, `phi-4` is best suited for prompts using the chat format as follows: |
|
|
|
```bash
<|im_start|>system<|im_sep|>
You are a medieval knight and must provide explanations to modern people.<|im_end|>
<|im_start|>user<|im_sep|>
How should I explain the Internet?<|im_end|>
<|im_start|>assistant<|im_sep|>
```
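
For scripted inference, here is a minimal sketch using ExLlamaV2's dynamic generator, modeled on the library's own inference examples; the model path is a placeholder, and the exact API may vary slightly between versions:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "phi-4-exl2-4bpw"  # placeholder: local path to a downloaded quant
config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, max_seq_len=16384, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

# Build a prompt in the Phi-4 chat format shown above.
prompt = (
    "<|im_start|>system<|im_sep|>\n"
    "You are a medieval knight and must provide explanations to modern people.<|im_end|>\n"
    "<|im_start|>user<|im_sep|>\n"
    "How should I explain the Internet?<|im_end|>\n"
    "<|im_start|>assistant<|im_sep|>\n"
)

output = generator.generate(prompt=prompt, max_new_tokens=256)
print(output)
```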
|
|
|
### With ExUI
|
|
|
To add the Phi-4 prompt format, replace `exui/backend/prompts.py` with the modified version from this repo (or script the replacement as sketched below): https://huggingface.co/cmh/phi-4_exl2/raw/main/backend/prompts.py
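
A hedged sketch of that replacement using `huggingface_hub`; the destination path assumes a local ExUI checkout named `exui`:

```python
import shutil
from huggingface_hub import hf_hub_download

# Download the modified prompts.py from this repo's main branch.
path = hf_hub_download(repo_id="cmh/phi-4_exl2", filename="backend/prompts.py")

# Overwrite ExUI's copy (adjust to wherever your ExUI checkout lives).
shutil.copy(path, "exui/backend/prompts.py")
```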
|
|