gpt-oss-120b — MLX 8-bit (group size 32)

Summary. This is an 8-bit (int8) MLX quantization of openai/gpt-oss-120b with a quantization group size of 32, built for Apple Silicon with Metal acceleration.

  • Base model: openai/gpt-oss-120b (Apache-2.0)
  • Quantization: MLX int8, q_group_size=32 (some tensors may remain 16-bit for stability)
  • Files: MLX weight shards + config.json; tokenizer files included for drop-in use
  • Intended use: local inference / research on M-series Macs
  • Not intended for: safety-critical decisions; outputs may be inaccurate or biased

Requirements

Runs on Apple Silicon (M1 or newer) with macOS ≥ 13.5 via MLX (Metal).

  • Not supported: Intel macOS / Linux / Windows (consider a GGUF build + llama.cpp instead).
  • Memory guidance: large unified memory is recommended (e.g., 64 GB+; 96 GB provides comfortable headroom). The effective GPU working set is capped by Metal’s budget, so keep 5–10% headroom; the snippet below shows how to query that budget.
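
To check how much of the unified memory Metal will actually hand to MLX on a given machine, you can query the device info. A minimal sketch, assuming a recent MLX release where mx.metal.device_info() is available (exact field names may vary by version):

import mlx.core as mx

# Ask Metal for this machine's memory characteristics (field names may vary by MLX version).
info = mx.metal.device_info()
budget_gib = info["max_recommended_working_set_size"] / 1024**3
print(f"Recommended GPU working set: {budget_gib:.1f} GiB")
print(f"Total unified memory reported: {info['memory_size'] / 1024**3:.1f} GiB")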

How to use (MLX)

# Install
pip install mlx-lm

# Python API (uses the tokenizer bundled with this repo)
from mlx_lm import load, generate

model, tokenizer = load("halley-ai/gpt-oss-120b-MLX-8bit-gs32")
print(generate(
    model, tokenizer,
    prompt="Explain the Chudnovsky algorithm to compute π.",
    max_tokens=256, max_kv_size=512,
))

# CLI
python -m mlx_lm generate --model halley-ai/gpt-oss-120b-MLX-8bit-gs32 \
  --prompt "Explain the Chudnovsky algorithm to compute pi." \
  --max-kv-size 512 --max-tokens 256
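
gpt-oss is a chat-tuned model, so for conversational use you will usually want to build the prompt through the tokenizer's chat template rather than pass raw text. A minimal sketch, assuming the bundled tokenizer ships a chat template and that mlx_lm exposes the standard Hugging Face apply_chat_template on its tokenizer wrapper:

from mlx_lm import load, generate

model, tokenizer = load("halley-ai/gpt-oss-120b-MLX-8bit-gs32")

# Render chat messages into a single prompt string using the tokenizer's template.
messages = [{"role": "user", "content": "Explain the Chudnovsky algorithm to compute pi."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

print(generate(model, tokenizer, prompt=prompt, max_tokens=256, max_kv_size=512))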

Evaluation

Streaming perplexity (PPL) evaluation on WikiText-2 (raw, test split), using the fast preset: window = stride = 4096 (non-overlapping windows), roughly 100k tokens, with an EOS token inserted between documents.

Variant                  PPL (ctx=4096, fast)
MLX 8-bit (gs=32)        7.39
MLX bf16 (reference)     7.38
MLX 6-bit (gs=64)        7.40

Notes:

  • Results from local runs on Apple Silicon using MLX; numbers vary slightly with tokenizer details, logits dtype, and token subset.
  • For more sensitive comparisons, use overlapping windows (e.g., --stride 512) and evaluate the full split.
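
The evaluation script itself is not bundled with this repo; the following is a rough sketch of the streaming, non-overlapping-window procedure described above (the dataset loading, token budget, and chunking details are illustrative, not the exact script behind the numbers in the table):

import math
import mlx.core as mx
import mlx.nn as nn
from mlx_lm import load
from datasets import load_dataset

model, tokenizer = load("halley-ai/gpt-oss-120b-MLX-8bit-gs32")
docs = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"]

# Concatenate documents with EOS between them, then cap at ~100k tokens (fast preset).
ids = []
for doc in docs:
    if doc.strip():
        ids.extend(tokenizer.encode(doc) + [tokenizer.eos_token_id])
ids = ids[:100_000]

window = 4096  # window == stride, i.e. non-overlapping windows
total_nll, total_tokens = 0.0, 0
for start in range(0, len(ids) - 1, window):
    chunk = ids[start:start + window + 1]
    if len(chunk) < 2:
        break
    inputs = mx.array(chunk[:-1])[None]
    targets = mx.array(chunk[1:])[None]
    logits = model(inputs).astype(mx.float32)
    nll = nn.losses.cross_entropy(logits, targets, reduction="none")
    total_nll += nll.sum().item()
    total_tokens += targets.size

print(f"PPL: {math.exp(total_nll / total_tokens):.2f}")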

Conversion details (provenance)

python -m mlx_lm convert \
  --hf-path openai/gpt-oss-120b \
  --mlx-path gpt-oss-120b-MLX-8bit-gs32 \
  --q-bits 8 --q-group-size 32 -q

  • Some tensors (e.g., embeddings/norms/router) may remain 16-bit for numerical stability.
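
To confirm what the converter actually wrote, you can inspect the quantization block of the exported config.json. A small sketch; the key names shown are the ones mlx_lm typically records and may differ between versions:

import json

# Read the quantization settings recorded by mlx_lm convert.
with open("gpt-oss-120b-MLX-8bit-gs32/config.json") as f:
    config = json.load(f)

quant = config.get("quantization", {})
print("bits:", quant.get("bits"), "| group_size:", quant.get("group_size"))
# Layers kept at higher precision typically show up as per-layer overrides in this dict.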

Sibling & reference models

  • halley-ai/gpt-oss-120b-MLX-bf16 (non-quantized reference)
  • halley-ai/gpt-oss-120b-MLX-6bit-gs64 (smaller/faster variant)

Limitations & biases

Outputs may be factually wrong or unsafe. Do not use for medical, legal, or financial decisions without human review. Large models can be sensitive to prompts; prefer explicit instructions and structure.

License & credits

  • License: Apache-2.0 (inherits from base model)
  • Base model: OpenAI gpt-oss-120B
  • Quantization: Halley AI Lab (MLX int8, gs=32)
  • Please cite both the base model and this repository when you use the weights.