---
license: apache-2.0
base_model:
- all-hands/openhands-lm-32b-v0.1
pipeline_tag: text-generation
tags:
- mlx
---

# rdsm/openhands-lm-32b-v0.1-mlx-mixed-3_6bit

MLX mixed-precision quantization (3/6 bits, ~4 bpw), produced with `--quant-predicate mixed_3_6`.

The model [rdsm/openhands-lm-32b-v0.1-mlx-mixed-3_6bit](https://huggingface.co/rdsm/openhands-lm-32b-v0.1-mlx-mixed-3_6bit) was converted to MLX format from [all-hands/openhands-lm-32b-v0.1](https://huggingface.co/all-hands/openhands-lm-32b-v0.1) using mlx-lm version **0.22.2**.

Note: Qwen 2.5 0.5B / 1.5B Instruct seem to work fine as draft models for speculative decoding.
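
## Use with mlx-lm

A minimal sketch of loading and running this quantized model with the `mlx_lm` Python API (`pip install mlx-lm`); the prompt text is a placeholder, and running it requires Apple Silicon plus a local download of the weights:

```python
from mlx_lm import load, generate

# Load the quantized weights and tokenizer from the Hugging Face Hub.
model, tokenizer = load("rdsm/openhands-lm-32b-v0.1-mlx-mixed-3_6bit")

prompt = "Write a Python function that reverses a string."

# Apply the chat template if the tokenizer ships one.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

For speculative decoding with one of the draft models noted above, the `mlx_lm.generate` CLI accepts a `--draft-model` argument pointing at a compatible smaller model.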