lbourdois committed (verified)
Commit 8bf2c12 · Parent(s): 9936a94

Improve language tag


Hi! As the model is multilingual, this PR adds languages other than English to the language tag to improve how the model is referenced. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13 languages.

Files changed (1)
  README.md  +71 −57
README.md CHANGED
@@ -1,57 +1,71 @@
- ---
- library_name: mlc-llm
- base_model: Qwen/Qwen2.5-0.5B-Instruct
- tags:
- - mlc-llm
- - web-llm
- ---
-
- # Qwen2.5-0.5B-Instruct-q0f16-MLC
-
- This is the [Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) model in MLC format `q0f16`.
- The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm).
-
- ## Example Usage
-
- Here are some examples of using this model in MLC LLM.
- Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).
-
- ### Chat
-
- In command line, run
- ```bash
- mlc_llm chat HF://mlc-ai/Qwen2.5-0.5B-Instruct-q0f16-MLC
- ```
-
- ### REST Server
-
- In command line, run
- ```bash
- mlc_llm serve HF://mlc-ai/Qwen2.5-0.5B-Instruct-q0f16-MLC
- ```
-
- ### Python API
-
- ```python
- from mlc_llm import MLCEngine
-
- # Create engine
- model = "HF://mlc-ai/Qwen2.5-0.5B-Instruct-q0f16-MLC"
- engine = MLCEngine(model)
-
- # Run chat completion in OpenAI API.
- for response in engine.chat.completions.create(
-     messages=[{"role": "user", "content": "What is the meaning of life?"}],
-     model=model,
-     stream=True,
- ):
-     for choice in response.choices:
-         print(choice.delta.content, end="", flush=True)
-     print("\n")
-
- engine.terminate()
- ```
-
- ## Documentation
-
- For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
+ ---
+ library_name: mlc-llm
+ base_model: Qwen/Qwen2.5-0.5B-Instruct
+ tags:
+ - mlc-llm
+ - web-llm
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ ---
+
+ # Qwen2.5-0.5B-Instruct-q0f16-MLC
+
+ This is the [Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) model in MLC format `q0f16`.
+ The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm).
+
+ ## Example Usage
+
+ Here are some examples of using this model in MLC LLM.
+ Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).
+
+ ### Chat
+
+ In command line, run
+ ```bash
+ mlc_llm chat HF://mlc-ai/Qwen2.5-0.5B-Instruct-q0f16-MLC
+ ```
+
+ ### REST Server
+
+ In command line, run
+ ```bash
+ mlc_llm serve HF://mlc-ai/Qwen2.5-0.5B-Instruct-q0f16-MLC
+ ```
+
+ ### Python API
+
+ ```python
+ from mlc_llm import MLCEngine
+
+ # Create engine
+ model = "HF://mlc-ai/Qwen2.5-0.5B-Instruct-q0f16-MLC"
+ engine = MLCEngine(model)
+
+ # Run chat completion in OpenAI API.
+ for response in engine.chat.completions.create(
+     messages=[{"role": "user", "content": "What is the meaning of life?"}],
+     model=model,
+     stream=True,
+ ):
+     for choice in response.choices:
+         print(choice.delta.content, end="", flush=True)
+     print("\n")
+
+ engine.terminate()
+ ```
+
+ ## Documentation
+
+ For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
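As a side note on the REST Server section of this README: the `mlc_llm serve` command starts an OpenAI-compatible server, and the sketch below shows one way to query it from Python. It is a minimal example under assumptions not stated in this card: that the server listens at the default `http://127.0.0.1:8000`, exposes a `/v1/chat/completions` route, and accepts the same model string used in the commands above; adjust the address if you pass different `--host`/`--port` options.

```python
# Minimal sketch: query a locally running `mlc_llm serve` instance.
# Assumptions: default address http://127.0.0.1:8000, an OpenAI-compatible
# /v1/chat/completions endpoint, and a model field matching the string the
# server was launched with.
import requests

payload = {
    "model": "HF://mlc-ai/Qwen2.5-0.5B-Instruct-q0f16-MLC",
    "messages": [{"role": "user", "content": "What is the meaning of life?"}],
    "stream": False,
}

resp = requests.post(
    "http://127.0.0.1:8000/v1/chat/completions",
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

The same payload should also work with curl or an OpenAI-style client pointed at the local base URL, since the request follows the chat-completions schema.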