metascroy committed
Commit 0a8bc88 · verified · 1 Parent(s): 3d69139

Update README.md

Files changed (1): README.md +3 -1
README.md CHANGED
@@ -218,7 +218,7 @@ So we first use a conversion script that converts the Hugging Face checkpoint ke
 python -m executorch.examples.models.phi_4_mini.convert_weights $(hf download pytorch/Phi-4-mini-instruct-INT8-INT4) pytorch_model_converted.bin
 ```
 
-Once we have the checkpoint, we export it to ExecuTorch with the XNNPACK backend as follows with a max_seq_length/max_context_length of 1024.
+Once we have the checkpoint, we export it to ExecuTorch with a max_seq_length/max_context_length of 1024 to the XNNPACK backend as follows.
 
 (Note: ExecuTorch LLM export script requires config.json have certain key names. The correct config to use for the LLM export script is located at examples/models/phi_4_mini/config/config.json within the ExecuTorch repo.)
 
@@ -240,6 +240,8 @@ python -m executorch.examples.models.llama.export_llama \
 
 After that you can run the model in a mobile app (see [Running in a mobile app](#running-in-a-mobile-app)).
 
+(We try to keep these instructions up-to-date, but if you find they do not work, check out our [CI test in ExecuTorch](https://github.com/pytorch/executorch/blob/main/.ci/scripts/test_torchao_huggingface_checkpoints.sh) for the latest source of truth, and let us know we need to update our model card.)
+
 # Paper: TorchAO: PyTorch-Native Training-to-Serving Model Optimization
 The model's quantization is powered by **TorchAO**, a framework presented in the paper [TorchAO: PyTorch-Native Training-to-Serving Model Optimization](https://huggingface.co/papers/2507.16099).