inferencerlabs committed
Commit 0382ddb · verified · 1 Parent(s): 1f6afe2

Upload complete model

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -7,7 +7,7 @@ tags:
 - mlx
 base_model: openai/gpt-oss-120b
 ---
-**See gpt-oss-120b 6.5bit MLX in action - [demonstration video](https://youtube.com/xcreate)**
+**See gpt-oss-120b 6.5bit MLX in action - [demonstration video](https://youtu.be/mlpFG8e_fLw)**
 
 *q6.5bit quant typically achieves 1.128 perplexity in our testing which is equivalent to q8.*
 | Quantization | Perplexity |
@@ -23,4 +23,4 @@ base_model: openai/gpt-oss-120b
 * Built with a modified version of [MLX](https://github.com/ml-explore/mlx) 0.26
 * Memory usage: ~95 GB
 * Expect ~60 tokens/s
-* For more details see [demonstration video](https://youtube.com/xcreate) or visit [OpenAI gpt-oss-20b](https://huggingface.co/openai/gpt-oss-120b).
+* For more details see [demonstration video](https://youtu.be/mlpFG8e_fLw) or visit [OpenAI gpt-oss-20b](https://huggingface.co/openai/gpt-oss-120b).
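The README quotes a perplexity of 1.128 for the q6.5bit quant. For readers unfamiliar with the metric, here is a minimal sketch of how perplexity is conventionally derived from per-token negative log-likelihoods; the function name and the sample values are illustrative, not taken from this repository or its evaluation harness:

```python
import math

def perplexity(neg_log_likelihoods):
    """Perplexity is exp of the mean per-token negative log-likelihood (nats)."""
    return math.exp(sum(neg_log_likelihoods) / len(neg_log_likelihoods))

# Hypothetical per-token NLLs from scoring a held-out text
nlls = [0.05, 0.20, 0.11, 0.12]
print(round(perplexity(nlls), 3))  # a value close to 1.0 means near-certain predictions
```

A perplexity near 1.0, as in the table above, indicates the quantized model's token predictions are almost as sharp as the reference; that is why a q6.5bit score of 1.128 can be described as equivalent to q8.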