Update README.md

README.md (changed)

@@ -7,7 +7,9 @@ tags:
 
 13.7 TPS
 
-27.1 TPS with Speculative decoding in LMstudio.
+27.1 TPS with Speculative decoding in LMstudio.
+
+Draft model: [DeepScaleR-1.5B-Preview-Q8](https://huggingface.co/mlx-community/DeepScaleR-1.5B-Preview-Q8)
 
 Macbook M4 Max: high power
 
@@ -20,7 +22,7 @@ Context: 131072, Temp: 0
 
 
 Try this model & quant in Roo Code, starting in Architect Mode and letting it auto switch to Code Mode... it actually spits decent code for small projects with multiple files.
-
+Near Claude Sonnet level for small projects. It actually stays reasonably stable even with Roo Code's huge 10k system prompt. Still shits the bed for big projects, but better after adding roo-code-memory-bank.
 
 All the smaller quants I tested shit the bed
 
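The throughput figures in this change imply roughly a 2x decode speedup from speculative decoding with the DeepScaleR-1.5B draft model. A quick sanity check (numbers copied from the diff above, not measured here):

```python
# Back-of-the-envelope check of the reported speedup; nothing here
# queries LM Studio, it just uses the figures from the model card.
baseline_tps = 13.7  # plain decoding on the M4 Max (high power)
spec_tps = 27.1      # with speculative decoding + DeepScaleR-1.5B draft

speedup = spec_tps / baseline_tps
ms_per_token = {
    "baseline": 1000.0 / baseline_tps,    # ~73 ms/token
    "speculative": 1000.0 / spec_tps,     # ~37 ms/token
}
print(f"speedup: {speedup:.2f}x")         # ~1.98x
```

Note that speculative decoding speedups are workload-dependent: the draft model's acceptance rate varies with the prompt, so the ~2x figure reported here may not hold for other tasks.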