|
--- |
|
library_name: transformers |
|
tags: |
|
- torchao |
|
- phi |
|
- phi4 |
|
- nlp |
|
- code |
|
- math |
|
- chat |
|
- conversational |
|
license: mit |
|
language: |
|
- multilingual |
|
base_model: |
|
- microsoft/Phi-4-mini-instruct |
|
pipeline_tag: text-generation |
|
--- |
|
|
|
[Phi4-mini](https://huggingface.co/microsoft/Phi-4-mini-instruct) is quantized by the PyTorch team using [torchao](https://huggingface.co/docs/transformers/main/en/quantization/torchao), with 8-bit weights for the embeddings, and 8-bit dynamic activations with 4-bit weights for the linear layers (INT8-INT4).
|
The model is suitable for mobile deployment with [ExecuTorch](https://github.com/pytorch/executorch). |
|
|
|
We provide the [quantized pte](https://huggingface.co/pytorch/Phi-4-mini-instruct-INT8-INT4/blob/main/model.pte) for direct use in ExecuTorch. |
|
(The provided pte file is exported with a max_seq_length/max_context_length of 1024; if you wish to change this, re-export the quantized model following the instructions in [Exporting to ExecuTorch](#exporting-to-executorch).)
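For example, you can fetch the pte file with the Hugging Face CLI (installed via `pip install -U "huggingface_hub[cli]"`, as in the quantization recipe below):

```Shell
hf download pytorch/Phi-4-mini-instruct-INT8-INT4 model.pte
```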
|
|
|
# Running in a mobile app |
|
The [pte file](https://huggingface.co/pytorch/Phi-4-mini-instruct-INT8-INT4/blob/main/model.pte) can be run with ExecuTorch on a mobile phone. See the [instructions](https://pytorch.org/executorch/main/llm/llama-demo-ios.html) for doing this in iOS. |
|
On iPhone 15 Pro, the model runs at 17.3 tokens/sec and uses 3206 MB of memory.
|
|
|
 |
|
|
|
⚠️ **Caveat:** Our mobile demo apps have **regressed support for the Phi-4 tokenizer**, so this model will not currently run in our official apps. If you are using your own app and runner, you can still load and run the `.pte` file successfully. See https://github.com/pytorch/executorch/issues/14077 for details and tracking. |
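If you are using your own runner, a desktop smoke test with ExecuTorch's example llama runner looks roughly like the sketch below (the binary path assumes you built the `examples/models/llama` target with CMake; adjust the paths and tokenizer file to your setup):

```Shell
# sketch: run the exported model with the example llama runner built from the ExecuTorch repo
cmake-out/examples/models/llama/llama_main \
  --model_path model.pte \
  --tokenizer_path tokenizer.json \
  --prompt "Hey, are you conscious?"
```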
|
|
|
# Quantization Recipe |
|
|
|
First, install the required packages:
|
```Shell |
|
pip install git+https://github.com/huggingface/transformers@main |
|
pip install --pre torchao torch --index-url https://download.pytorch.org/whl/nightly/cu126 |
|
``` |
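You can quickly confirm that the nightly installs took effect:

```Shell
python -c "import torch, torchao, transformers; print(torch.__version__, torchao.__version__, transformers.__version__)"
```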
|
|
|
## Untie Embedding Weights |
|
We want to quantize the embedding and lm_head differently. Since those layers are tied, we first need to untie the model: |
|
|
|
```Py |
|
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
)
|
import torch |
|
|
|
model_id = "microsoft/Phi-4-mini-instruct" |
|
untied_model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto") |
|
tokenizer = AutoTokenizer.from_pretrained(model_id) |
|
|
|
print(untied_model) |
|
from transformers.modeling_utils import find_tied_parameters
print("tied weights:", find_tied_parameters(untied_model))

# mark the word embeddings as untied in the config
text_config = untied_model.config.get_text_config(decoder=True)
if getattr(text_config, "tie_word_embeddings", False):
    text_config.tie_word_embeddings = False

# give lm_head its own copy of the weights so it is no longer tied to embed_tokens
untied_model._tied_weights_keys = []
untied_model.lm_head.weight = torch.nn.Parameter(untied_model.lm_head.weight.clone())

print("tied weights:", find_tied_parameters(untied_model))
|
|
|
USER_ID = "YOUR_USER_ID" |
|
MODEL_NAME = model_id.split("/")[-1] |
|
save_to = f"{USER_ID}/{MODEL_NAME}-untied-weights" |
|
|
|
untied_model.push_to_hub(save_to) |
|
tokenizer.push_to_hub(save_to) |
|
|
|
# or save locally
save_to_local_path = f"{MODEL_NAME}-untied-weights"
untied_model.save_pretrained(save_to_local_path)
tokenizer.save_pretrained(save_to_local_path)
|
``` |
|
|
|
Note: to `push_to_hub` you need to run |
|
```Shell |
|
pip install -U "huggingface_hub[cli]" |
|
huggingface-cli login |
|
``` |
|
and use a token with write access, from https://huggingface.co/settings/tokens |
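As an optional sanity check (a minimal sketch, assuming the local save path from the snippet above), you can reload the untied checkpoint and confirm that `lm_head` and `embed_tokens` no longer share storage:

```Py
from transformers import AutoModelForCausalLM

reloaded = AutoModelForCausalLM.from_pretrained(save_to_local_path, torch_dtype="auto")
# untied weights live in separate tensors, so the storage pointers must differ
assert reloaded.lm_head.weight.data_ptr() != reloaded.model.embed_tokens.weight.data_ptr()
```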
|
|
|
## Quantization |
|
|
|
We used the following code to produce the quantized model:
|
|
|
```Py |
|
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    TorchAoConfig,
)
from torchao.quantization.quant_api import (
    IntxWeightOnlyConfig,
    Int8DynamicActivationIntxWeightConfig,
    ModuleFqnToConfig,
)
|
from torchao.quantization.granularity import PerGroup, PerAxis |
|
import torch |
|
|
|
# we start from the model with untied weights |
|
model_id = "microsoft/Phi-4-mini-instruct" |
|
USER_ID = "YOUR_USER_ID" |
|
MODEL_NAME = model_id.split("/")[-1] |
|
untied_model_id = f"{USER_ID}/{MODEL_NAME}-untied-weights" |
|
untied_model_local_path = f"{MODEL_NAME}-untied-weights" |
|
|
|
embedding_config = IntxWeightOnlyConfig( |
|
weight_dtype=torch.int8, |
|
granularity=PerAxis(0), |
|
) |
|
linear_config = Int8DynamicActivationIntxWeightConfig( |
|
weight_dtype=torch.int4, |
|
weight_granularity=PerGroup(32), |
|
weight_scale_dtype=torch.bfloat16, |
|
) |
|
quant_config = ModuleFqnToConfig({"_default": linear_config, "model.embed_tokens": embedding_config}) |
|
quantization_config = TorchAoConfig(quant_type=quant_config, include_input_output_embeddings=True, modules_to_not_convert=[]) |
|
|
|
# either use `untied_model_id` or `untied_model_local_path` |
|
quantized_model = AutoModelForCausalLM.from_pretrained(untied_model_id, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config) |
|
tokenizer = AutoTokenizer.from_pretrained(model_id) |
|
|
|
# Push to hub |
|
MODEL_NAME = model_id.split("/")[-1] |
|
save_to = f"{USER_ID}/{MODEL_NAME}-INT8-INT4" |
|
quantized_model.push_to_hub(save_to, safe_serialization=False) |
|
tokenizer.push_to_hub(save_to) |
|
|
|
# Manual testing |
|
prompt = "Hey, are you conscious? Can you talk to me?" |
|
messages = [ |
|
{ |
|
"role": "system", |
|
"content": "", |
|
}, |
|
{"role": "user", "content": prompt}, |
|
] |
|
templated_prompt = tokenizer.apply_chat_template( |
|
messages, |
|
tokenize=False, |
|
add_generation_prompt=True, |
|
) |
|
print("Prompt:", prompt) |
|
print("Templated prompt:", templated_prompt) |
|
inputs = tokenizer( |
|
templated_prompt, |
|
return_tensors="pt", |
|
).to("cuda") |
|
generated_ids = quantized_model.generate(**inputs, max_new_tokens=128) |
|
output_text = tokenizer.batch_decode( |
|
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False |
|
) |
|
print("Response:", output_text[0][len(prompt):]) |
|
``` |
|
|
|
The response from the manual testing is: |
|
|
|
``` |
|
Hello! As an AI, I don't have consciousness in the way humans do, but I am fully operational and here to assist you. How can I help you today? |
|
``` |
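If you want a rough sense of the size savings, here is a minimal sketch (continuing from the snippet above; the local directory name is arbitrary) that saves the quantized model and sums the checkpoint file sizes:

```Py
import os

quantized_model.save_pretrained("phi4-mini-int8-int4-local", safe_serialization=False)
# walk the save directory and add up all file sizes
size = sum(
    os.path.getsize(os.path.join(root, f))
    for root, _, files in os.walk("phi4-mini-int8-int4-local")
    for f in files
)
print(f"quantized checkpoint size on disk: {size / 1e9:.2f} GB")
```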
|
|
|
# Model Quality |
|
|
|
We rely on [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate the quality of the quantized model. |
|
|
|
| Benchmark                        | Phi-4-mini-instruct | Phi-4-mini-instruct-INT8-INT4 |
|----------------------------------|---------------------|-------------------------------|
| **Popular aggregated benchmark** |                     |                               |
| mmlu (0-shot)                    | 66.73               | 60.75                         |
| mmlu_pro (5-shot)                | 46.43               | 11.75                         |
| **Reasoning**                    |                     |                               |
| arc_challenge                    | 56.91               | 48.46                         |
| gpqa_main_zeroshot               | 30.13               | 30.80                         |
| hellaswag                        | 54.57               | 50.35                         |
| openbookqa                       | 33.00               | 30.40                         |
| piqa (0-shot)                    | 77.64               | 74.43                         |
| siqa                             | 49.59               | 44.98                         |
| truthfulqa_mc2 (0-shot)          | 48.39               | 51.35                         |
| winogrande (0-shot)              | 71.11               | 70.32                         |
| **Multilingual**                 |                     |                               |
| mgsm_en_cot_en                   | 60.80               | 57.60                         |
| **Math**                         |                     |                               |
| gsm8k (5-shot)                   | 81.88               | 61.71                         |
| mathqa (0-shot)                  | 42.31               | 36.95                         |
| **Overall**                      | 55.35               | 48.45                         |
|
|
|
<details> |
|
<summary> Reproduce Model Quality Results </summary> |
|
|
|
First, install lm-eval from source: https://github.com/EleutherAI/lm-evaluation-harness#install
|
|
|
## baseline |
|
```Shell |
|
lm_eval --model hf --model_args pretrained=microsoft/Phi-4-mini-instruct --tasks hellaswag --device cuda:0 --batch_size 8 |
|
``` |
|
|
|
## int8 dynamic activation and int4 weight quantization (INT8-INT4) |
|
```Shell |
|
lm_eval --model hf --model_args pretrained=pytorch/Phi-4-mini-instruct-INT8-INT4 --tasks hellaswag --device cuda:0 --batch_size 8 |
|
``` |
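To reproduce several rows of the table in one run, lm-eval also accepts a comma-separated task list, e.g.:

```Shell
lm_eval --model hf --model_args pretrained=pytorch/Phi-4-mini-instruct-INT8-INT4 --tasks mmlu,arc_challenge,hellaswag --device cuda:0 --batch_size 8
```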
|
</details> |
|
|
|
# Exporting to ExecuTorch |
|
|
|
We can run the quantized model on a mobile phone using [ExecuTorch](https://github.com/pytorch/executorch). |
|
Once ExecuTorch is [set up](https://pytorch.org/executorch/main/getting-started.html), exporting and running the model on device is a breeze.
|
|
|
ExecuTorch's LLM export scripts require that the checkpoint keys and parameters have certain names, which differ from those used in Hugging Face.
|
So we first use a script that converts the Hugging Face checkpoint key names to ones that ExecuTorch expects: |
|
```Shell |
|
python -m executorch.examples.models.phi_4_mini.convert_weights $(hf download pytorch/Phi-4-mini-instruct-INT8-INT4) pytorch_model_converted.bin |
|
``` |
|
|
|
Once we have the checkpoint, we export it for the XNNPACK backend with a max_seq_length/max_context_length of 1024, as follows.
|
|
|
(Note: the ExecuTorch LLM export script requires that config.json have certain key names. The correct config to use for the LLM export script is located at examples/models/phi_4_mini/config/config.json within the ExecuTorch repo.)
|
|
|
```Shell |
|
python -m executorch.examples.models.llama.export_llama \
|
--model "phi_4_mini" \ |
|
--checkpoint pytorch_model_converted.bin \ |
|
--params examples/models/phi_4_mini/config/config.json \ |
|
--output_name model.pte \ |
|
-kv \ |
|
--use_sdpa_with_kv_cache \ |
|
-X \ |
|
--xnnpack-extended-ops \ |
|
--max_context_length 1024 \ |
|
--max_seq_length 1024 \ |
|
--dtype fp32 \ |
|
--metadata '{"get_bos_id":199999, "get_eos_ids":[200020,199999]}' |
|
``` |
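Before moving to a phone, you can smoke-test the exported file with ExecuTorch's Python runtime (a sketch; it assumes the Python bindings from your ExecuTorch install and only checks that the program loads):

```Py
from executorch.runtime import Runtime

# load the exported program and list its methods as a quick integrity check
runtime = Runtime.get()
program = runtime.load_program("model.pte")
print("methods:", program.method_names)
```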
|
|
|
After that you can run the model in a mobile app (see [Running in a mobile app](#running-in-a-mobile-app)). |
|
|
|
(We try to keep these instructions up-to-date, but if you find they do not work, check out our [CI test in ExecuTorch](https://github.com/pytorch/executorch/blob/main/.ci/scripts/test_torchao_huggingface_checkpoints.sh) for the latest source of truth, and let us know so we can update the model card.)
|
|
|
# Paper: TorchAO: PyTorch-Native Training-to-Serving Model Optimization |
|
The model's quantization is powered by **TorchAO**, a framework presented in the paper [TorchAO: PyTorch-Native Training-to-Serving Model Optimization](https://huggingface.co/papers/2507.16099). |
|
|
|
**Abstract:** We present TorchAO, a PyTorch-native model optimization framework leveraging quantization and sparsity to provide an end-to-end, training-to-serving workflow for AI models. TorchAO supports a variety of popular model optimization techniques, including FP8 quantized training, quantization-aware training (QAT), post-training quantization (PTQ), and 2:4 sparsity, and leverages a novel tensor subclass abstraction to represent a variety of widely-used, backend agnostic low precision data types, including INT4, INT8, FP8, MXFP4, MXFP6, and MXFP8. TorchAO integrates closely with the broader ecosystem at each step of the model optimization pipeline, from pre-training (TorchTitan) to fine-tuning (TorchTune, Axolotl) to serving (HuggingFace, vLLM, SGLang, ExecuTorch), connecting an otherwise fragmented space in a single, unified workflow. TorchAO has enabled recent launches of the quantized Llama 3.2 1B/3B and LlamaGuard3-8B models and is open-source at [https://github.com/pytorch/ao](https://github.com/pytorch/ao).
|
|
|
# Resources |
|
* **Official TorchAO GitHub Repository:** [https://github.com/pytorch/ao](https://github.com/pytorch/ao) |
|
* **TorchAO Documentation:** [https://docs.pytorch.org/ao/stable/index.html](https://docs.pytorch.org/ao/stable/index.html) |
|
|
|
# Disclaimer |
|
PyTorch has not performed safety evaluations or red teamed the quantized models. Performance characteristics, outputs, and behaviors may differ from the original models. Users are solely responsible for selecting appropriate use cases, evaluating and mitigating for accuracy, safety, and fairness, ensuring security, and complying with all applicable laws and regulations. |
|
|
|
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the licenses the models are released under, including any limitations of liability or disclaimers of warranties provided therein. |