system prompt: "You are Fuse01. You answer very direct brief and concise"

prompt: "Write a quick sort in C++"

Context: 131072, Temp: 0

Try this model & quant in roo coder, starting in Architect Mode and letting it auto-switch to Code Mode... it actually spits out decent code for small projects with multiple files.

All the smaller models & quants I tested shit the bed.

So far (Feb 20, 2025) this is the only model & quant that runs fast on Mac, spits out decent code, AND works with Speculative Decoding.

Huge thanks to all who helped Macs get this far!

Oh, here is the draft model [DeepScaleR-1.5B-Preview-Q8](https://huggingface.co/mlx-community/DeepScaleR-1.5B-Preview-Q8)
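
To wire the draft model up for Speculative Decoding from the command line, something like the following should work. This is a hedged sketch: it assumes a recent mlx-lm release that supports `--draft-model` on `mlx_lm.generate`; verify the exact flag names against `--help` for your installed version.

```shell
# Speculative decoding sketch: the 32B model verifies tokens proposed
# by the 1.5B draft model. Assumes a recent mlx-lm with --draft-model
# support; check `python -m mlx_lm.generate --help` first.
python -m mlx_lm.generate \
  --model bobig/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview-Q8 \
  --draft-model mlx-community/DeepScaleR-1.5B-Preview-Q8 \
  --temp 0 \
  --max-tokens 2048 \
  --prompt "Write a quick sort in C++"
```

In LM Studio the equivalent is picking the draft model in the Speculative Decoding section of the model's settings rather than passing a flag.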

# bobig/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview-Q8

The Model [bobig/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview-Q8](https://huggingface.co/bobig/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview-Q8) was