---
license: apache-2.0
base_model:
- all-hands/openhands-lm-32b-v0.1
pipeline_tag: text-generation
tags:
- mlx
---
# rdsm/openhands-lm-32b-v0.1-mlx-mixed-3_6bit
MLX mixed quantization (3-bit / 6-bit, produced with `--quant-predicate mixed_3_6`), roughly 4 bits per weight overall.
The model [rdsm/openhands-lm-32b-v0.1-mlx-mixed-3_6bit](https://huggingface.co/rdsm/openhands-lm-32b-v0.1-mlx-mixed-3_6bit) was
converted to MLX format from [all-hands/openhands-lm-32b-v0.1](https://huggingface.co/all-hands/openhands-lm-32b-v0.1)
using mlx-lm version **0.22.2**.
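With the `mlx-lm` Python package installed, the model can be loaded and prompted as in the sketch below (the prompt string is just a placeholder):

```python
from mlx_lm import load, generate

# Download (if needed) and load the quantized weights and tokenizer
model, tokenizer = load("rdsm/openhands-lm-32b-v0.1-mlx-mixed-3_6bit")

prompt = "Write a Python function that reverses a string."

# Apply the chat template when the tokenizer provides one
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```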
Note: Qwen 2.5 0.5B / 1.5B Instruct seem to work fine as draft models for speculative decoding.
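
A minimal sketch of speculative decoding with one of these draft models, assuming an mlx-lm release recent enough to accept a `draft_model` argument in `generate`; the draft repo id below is just one possible MLX conversion of Qwen 2.5 0.5B Instruct, not a recommendation:

```python
from mlx_lm import load, generate

model, tokenizer = load("rdsm/openhands-lm-32b-v0.1-mlx-mixed-3_6bit")

# Example draft model (assumed repo id); any small Qwen 2.5 Instruct MLX
# conversion with a compatible tokenizer should work similarly
draft_model, _ = load("mlx-community/Qwen2.5-0.5B-Instruct-4bit")

messages = [{"role": "user", "content": "Explain speculative decoding briefly."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# The draft model proposes tokens that the 32B model then verifies
response = generate(
    model,
    tokenizer,
    prompt=prompt,
    draft_model=draft_model,
    verbose=True,
)
```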