---
license: mit
language:
  - en
base_model:
  - unsloth/phi-4
  - microsoft/phi-4
pipeline_tag: text-generation
---

# Phi-4 converted for ExLlamaV2

ExLlamaV2 is an inference library for running local LLMs on modern consumer GPUs.

| Filename | Quant type | File Size | ~VRAM* |
| -------- | ---------- | --------- | ------ |
| phi-4_hb8_3bpw | 3.00 bits per weight | 6.66 GB | 10.3 GB |
| phi-4_hb8_4bpw | 4.00 bits per weight | 8.36 GB | 11.9 GB |
| phi-4_hb8_5bpw | 5.00 bits per weight | 10.1 GB | 13.5 GB |
| phi-4_hb8_6bpw | 6.00 bits per weight | 11.8 GB | 15.1 GB |
| phi-4_hb8_7bpw | 7.00 bits per weight | 13.5 GB | 16.7 GB |
| phi-4_hb8_8bpw | 8.00 bits per weight | 15.2 GB | 18.2 GB |

*Approximate VRAM usage at 16k context with FP16 cache.
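
A minimal loading sketch with ExLlamaV2's Python API. It assumes each quant is published on its own branch named after the table entry above; adjust `revision`, or point `ExLlamaV2Config` at a local copy, to match the repo's actual layout:

```python
# Minimal sketch: fetch and load one of the quants above with ExLlamaV2.
# Assumption: the 4bpw quant lives on a branch named "phi-4_hb8_4bpw";
# adjust revision/paths to match how this repo is actually organized.
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = snapshot_download("cmh/phi-4_exl2", revision="phi-4_hb8_4bpw")

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, max_seq_len=16384, lazy=True)  # FP16 cache at full context
model.load_autosplit(cache)                                  # split across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello!", max_new_tokens=64))
```

For best results, wrap prompts in the chat template shown under Input Formats below.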


## Phi-4 Model Card

Phi-4 Technical Report

### Model Summary

|  |  |
| --- | --- |
| **Developers** | Microsoft Research |
| **Description** | phi-4 is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small capable models were trained with data focused on high quality and advanced reasoning.<br><br>phi-4 underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. |
| **Architecture** | 14B parameters, dense decoder-only Transformer model |
| **Context length** | 16384 tokens |

### Usage

#### Input Formats

Given the nature of the training data, phi-4 is best suited for prompts using the chat format as follows:

```
<|im_start|>system<|im_sep|>
You are a medieval knight and must provide explanations to modern people.<|im_end|>
<|im_start|>user<|im_sep|>
How should I explain the Internet?<|im_end|>
<|im_start|>assistant<|im_sep|>
```
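
When driving the model programmatically (for example with the ExLlamaV2 generator sketched above), the same template can be assembled from a message list. A minimal sketch, assuming a simple role/content message structure; the helper name is illustrative, not part of phi-4 or ExLlamaV2:

```python
# Minimal sketch: build a phi-4 chat prompt from a message list.
# The helper name and message structure are illustrative assumptions.

def format_phi4_prompt(messages):
    """messages: list of {"role": str, "content": str} dicts."""
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}<|im_sep|>\n{m['content']}<|im_end|>\n"
    prompt += "<|im_start|>assistant<|im_sep|>\n"  # cue the model to respond
    return prompt

messages = [
    {"role": "system", "content": "You are a medieval knight and must provide explanations to modern people."},
    {"role": "user", "content": "How should I explain the Internet?"},
]
print(format_phi4_prompt(messages))
```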

#### With ExUI

To add the Phi-4 prompt format, edit or replace `exui/backend/prompts.py` with https://huggingface.co/cmh/phi-4_exl2/raw/main/backend/prompts.py.