|
--- |
|
base_model: tencent/Hunyuan-0.5B-Instruct |
|
library_name: mlx |
|
pipeline_tag: text-generation |
|
tags: |
|
- mlx |
|
--- |
|
|
|
# Hunyuan-0.5B-Instruct-dwq4-mlx |
|
|
|
This model comes in Strarbery flavor (spelling intentional, see the transcript below):
|
|
|
```text
|
How many R are in Strawberry |
|
|
|
Hunyuan-0.5B-Instruct-dwq4-mlx |
|
<think> |
|
Okay, let's see. The user is asking how many "R" are in "Strawberry." First, I need to remember the spelling of "Strawberry." Let me recall: "Strawberry" is spelled S-T-R-A-R-B-E-R-Y. Let's break it down letter by letter. |
|
|
|
Starting with the first letter: S. Then T. Next is R. Then A. Then R again. Then B. Then E. Then R. Then Y. |
|
|
|
So let's list them out: S, T, R, A, R, B, E, R, Y. Now, I need to count the "R" characters. Let's go through each letter: |
|
|
|
1. S (no) |
|
2. T (no) |
|
3. R (yes, count 1) |
|
4. A (no) |
|
5. R (yes, count 2) |
|
6. B (no) |
|
7. E (no) |
|
8. R (yes, count 3) |
|
9. Y (no) |
|
|
|
So that's three "R"s. Let me double-check to make sure I didn't miss any. "Strawberry" is S-T-R-A-R-B-E-R-Y. |
|
|
|
Let's spell it again |
|
``` |
|
|
|
Right answer, for the wrong reason: the model misspells the word, yet still lands on three.
|
|
|
538.516 tok/sec |
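The coincidence is easy to verify: both the correct spelling and the model's mistaken "S-T-R-A-R-B-E-R-Y" happen to contain exactly three R's, which is why the wrong enumeration still produced the right count. A quick sketch (variable names are mine, not from the card):

```python
# Correct spelling vs. the letters the model actually enumerated.
correct = "STRAWBERRY"
model_spelling = "STRARBERY"  # S-T-R-A-R-B-E-R-Y, from the <think> trace above

# Both contain three R's, so the final answer matched despite the misspelling.
print(correct.count("R"))         # 3
print(model_spelling.count("R"))  # 3
```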
|
|
|
|
|
This model [Hunyuan-0.5B-Instruct-dwq4-mlx](https://huggingface.co/Hunyuan-0.5B-Instruct-dwq4-mlx) was |
|
converted to MLX format from [tencent/Hunyuan-0.5B-Instruct](https://huggingface.co/tencent/Hunyuan-0.5B-Instruct) |
|
using mlx-lm version **0.26.3**. |
|
|
|
## Use with mlx |
|
|
|
```bash |
|
pip install mlx-lm |
|
``` |
|
|
|
```python |
|
from mlx_lm import load, generate |
|
|
|
model, tokenizer = load("Hunyuan-0.5B-Instruct-dwq4-mlx") |
|
|
|
prompt = "hello" |
|
|
|
if tokenizer.chat_template is not None: |
|
messages = [{"role": "user", "content": prompt}] |
|
prompt = tokenizer.apply_chat_template( |
|
messages, add_generation_prompt=True |
|
) |
|
|
|
response = generate(model, tokenizer, prompt=prompt, verbose=True) |
|
``` |
|
|