bobig committed on
Commit b6ec9f3 · verified · 1 Parent(s): 0bc7f68

Update README.md

Files changed (1):
  README.md +2 -2
README.md CHANGED
@@ -7,7 +7,7 @@ base_model: FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview
 
 # bobig/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-4.5bit
 
-The Model [bobig/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-4.5bit-MLX-LM-21.4](https://huggingface.co/bobig/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-4.5bit-MLX-LM-21.4) was
+The Model [bobig/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-4.5bit](https://huggingface.co/bobig/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-4.5bit) was
 converted to MLX format from [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview)
 using mlx-lm version **0.21.4**.
 
@@ -20,7 +20,7 @@ pip install mlx-lm
 ```python
 from mlx_lm import load, generate
 
-model, tokenizer = load("bobig/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-4.5bit-MLX-LM-21.4")
+model, tokenizer = load("bobig/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-4.5bit")
 
 prompt = "hello"