zzqsmall thinkthinking committed on
Commit 649f8b9 · verified · 1 Parent(s): 3d203b4

Update README.md (#1)

- Update README.md (fd0d47c445e285b7f5ee5ccbc6381635b5d4a4d0)

Co-authored-by: Ye Zhenjie <[email protected]>

Files changed (1)
  1. README.md +66 -30
README.md CHANGED
@@ -1,36 +1,35 @@
  ---
  license: mit
  base_model:
- - inclusionAI/Ling-flash-base-2.0
  pipeline_tag: text-generation
  library_name: transformers
  ---

-
-
  <p align="center">
  <img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
  <p>

- <p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>&nbsp&nbsp | &nbsp&nbsp🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a></p>
-

  ## Introduction

- Today, __Ling-flash-2.0__ is officially open-sourced! 🚀
- Following the release of the __language model [Ling-mini-2.0](https://huggingface.co/inclusionAI/Ling-mini-2.0)__ and the __thinking model [Ring-mini-2.0](https://huggingface.co/inclusionAI/Ring-mini-2.0)__, we are now open-sourcing the third MoE LLM under the __Ling 2.0 architecture: Ling-flash-2.0__, a language model with __100B total parameters__ and __6.1B activated parameters (4.8B non-embedding)__.
- Trained on __20T+ tokens of high-quality data__, together with __supervised fine-tuning__ and __multi-stage reinforcement learning__, Ling-flash-2.0 achieves __SOTA performance among dense models under 40B parameters__, despite activating only ~6B parameters. Compared to MoE models with larger activation/total parameters, it also demonstrates strong competitiveness. Notably, it delivers outstanding performance in __complex reasoning, code generation, and frontend development__.

  ### Powerful Complex Reasoning Abilities

  We conducted a comprehensive evaluation of Ling-flash-2.0’s reasoning capabilities, reporting strong results on representative benchmarks:
- * __Multi-disciplinary knowledge reasoning__: GPQA-Diamond, MMLU-Pro
- * __Advanced mathematical reasoning__: AIME 2025, Omni-MATH, OptMATH (advanced mathematical optimization tasks)
- * __Challenging code generation__: LiveCodeBench v6, CodeForces-Elo
- * __Logical reasoning__: KOR-Bench, ARC-Prize
- * __Key regulated industries (Finance, Healthcare)__: FinanceReasoning, HealthBench

- Compared with __dense models under 40B__ (e.g., Qwen3-32B-Non-Thinking, Seed-OSS-36B-Instruct (think budget=0)) and __larger-activation/total-parameter MoE models__ (e.g., Hunyuan-A13B-Instruct, GPT-OSS-120B/low), __Ling-flash-2.0__ demonstrates stronger complex reasoning power. Moreover, it shows high competitiveness on __creative tasks__ (Creative Writing v3).
  <p align="center">
  <img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/zxAvQ7QtrAwAAAAAQqAAAAgADkZ7AQFr/fmt.webp"/>
  <p>
@@ -45,11 +44,11 @@ Compared with __dense models under 40B__ (e.g., Qwen3-32B-Non-Thinking, Seed-OSS
  <img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/fMdiQZqYKSAAAAAAVdAAAAgADkZ7AQFr/fmt.avif"/>
  <p>

- Guided by [Ling Scaling Laws](https://arxiv.org/abs/2507.17702), Ling 2.0 adopts a __1/32 activation-ratio MoE architecture__, optimized across multiple design choices: expert granularity, shared-expert ratio, attention balance, __aux-loss-free + sigmoid routing strategy__, MTP layers, QK-Norm, Partial-RoPE, and more. These refinements enable __small-activation MoE__ models to achieve __7× efficiency gains__ over equivalent dense architectures.
- In other words, with just __6.1B activated parameters (4.8B non-embedding)__, __Ling-flash-2.0__ can match the performance of ~40B dense models. Thanks to its small activation size, it also delivers major inference speed advantages:
- * On __H20 hardware__, Ling-flash-2.0 achieves __200+ tokens/s__, offering __3× speedups__ compared to 36B dense models in everyday use.
- * With __YaRN extrapolation__, it supports __128K context length__, and as output length grows, its relative speedup can reach __7× or more__.

  <p align="center">
  <img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/oR9UTY7S0QgAAAAAgKAAAAgADkZ7AQFr/original"/>
@@ -59,25 +58,57 @@ In other words, with just __6.1B activated parameters (4.8B non-embedding)__, __
  <img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/Hid1RrgsCUAAAAAAQYAAAAgADkZ7AQFr/fmt.webp"/>
  <p>

-
  ## Model Downloads

  The table below lists the available Ling-flash-2.0 models at each stage. If you are located in mainland China, we also provide the models on ModelScope.cn to speed up the download process.

  <center>

- | **Model** | **Context Length** | **Download** |
- |:----------------------:| :----------------: |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
- | Ling-flash-base-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-flash-base-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-flash-base-2.0) |
- | Ling-flash-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-flash-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-flash-2.0) |

  </center>

  Note: If you are interested in previous versions, please visit the past model collections on [Hugging Face](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).

-
  ## Quickstart

  ### 🤗 Hugging Face Transformers

  Here is a code snippet to show you how to use the chat model with `transformers`:
@@ -177,7 +208,9 @@ vllm serve inclusionAI/Ling-flash-2.0 \
  ```

  To handle long context in vLLM using YaRN, we need to follow these two steps:
  1. Add a `rope_scaling` field to the model's `config.json` file, for example:
  ```json
  {
  ...,
@@ -188,24 +221,29 @@ To handle long context in vLLM using YaRN, we need to follow these two steps:
  }
  }
  ```
  2. Use an additional parameter `--max-model-len` to specify the desired maximum context length when starting the vLLM service.

  For detailed guidance, please refer to the vLLM [`instructions`](https://docs.vllm.ai/en/latest/).

-
  ### SGLang

  #### Environment Preparation

  We will submit our model to the official SGLang release later. For now, prepare the environment with the following steps:
  ```shell
  pip3 install sglang==0.5.2rc0 sgl-kernel==0.3.7.post1
  ```
  You can use the Docker image as well:
  ```shell
  docker pull lmsysorg/sglang:v0.5.2rc0-cu126
  ```
  Then apply the patch to the SGLang installation:
  ```shell
  # patch command is needed, run `yum install -y patch` if needed
  patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__file__))'` -p3 < inference/sglang/bailing_moe_v2.patch
@@ -213,9 +251,10 @@ patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__fil

  #### Run Inference

- BF16 and FP8 models are supported by SGLang now; which is used depends on the dtype of the model in ${MODEL_PATH}. Both share the same command:

  - Start server:
  ```shell
  python -m sglang.launch_server \
  --model-path $MODEL_PATH \
@@ -223,6 +262,7 @@ python -m sglang.launch_server \
  --trust-remote-code \
  --attention-backend fa3
  ```
  MTP is supported for the base model, but not yet for the chat model. You can add the parameter `--speculative-algorithm NEXTN`
  to the start command.

@@ -236,8 +276,6 @@ curl -s http://localhost:${PORT}/v1/chat/completions \

  More usage examples can be found [here](https://docs.sglang.ai/basic_usage/send_request.html).

-
-
  ### Finetuning

  We recommend using [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) to [finetune Ling](https://github.com/inclusionAI/Ling-V2/blob/main/docs/llamafactory_finetuning.md).
@@ -245,5 +283,3 @@ We recommend you to use [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory
  ## License

  This code repository is licensed under [the MIT License](https://github.com/inclusionAI/Ling-V2/blob/master/LICENCE).
-
-
  ---
  license: mit
  base_model:
+ - inclusionAI/Ling-flash-base-2.0
  pipeline_tag: text-generation
  library_name: transformers
  ---

  <p align="center">
  <img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
  <p>

+ <p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>&nbsp;&nbsp; | &nbsp;&nbsp;🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>&nbsp;&nbsp; | &nbsp;&nbsp;🐙 <a href="https://zenmux.ai/inclusionai/ling-flash-2.0">ChatNow</a></p>

  ## Introduction

+ Today, **Ling-flash-2.0** is officially open-sourced! 🚀
+ Following the release of the **language model [Ling-mini-2.0](https://huggingface.co/inclusionAI/Ling-mini-2.0)** and the **thinking model [Ring-mini-2.0](https://huggingface.co/inclusionAI/Ring-mini-2.0)**, we are now open-sourcing the third MoE LLM under the **Ling 2.0 architecture: Ling-flash-2.0**, a language model with **100B total parameters** and **6.1B activated parameters (4.8B non-embedding)**.
+ Trained on **20T+ tokens of high-quality data**, together with **supervised fine-tuning** and **multi-stage reinforcement learning**, Ling-flash-2.0 achieves **SOTA performance among dense models under 40B parameters**, despite activating only ~6B parameters. Compared to MoE models with larger activation/total parameters, it also demonstrates strong competitiveness. Notably, it delivers outstanding performance in **complex reasoning, code generation, and frontend development**.

  ### Powerful Complex Reasoning Abilities

  We conducted a comprehensive evaluation of Ling-flash-2.0’s reasoning capabilities, reporting strong results on representative benchmarks:

+ - **Multi-disciplinary knowledge reasoning**: GPQA-Diamond, MMLU-Pro
+ - **Advanced mathematical reasoning**: AIME 2025, Omni-MATH, OptMATH (advanced mathematical optimization tasks)
+ - **Challenging code generation**: LiveCodeBench v6, CodeForces-Elo
+ - **Logical reasoning**: KOR-Bench, ARC-Prize
+ - **Key regulated industries (Finance, Healthcare)**: FinanceReasoning, HealthBench
+
+ Compared with **dense models under 40B** (e.g., Qwen3-32B-Non-Thinking, Seed-OSS-36B-Instruct (think budget=0)) and **larger-activation/total-parameter MoE models** (e.g., Hunyuan-A13B-Instruct, GPT-OSS-120B/low), **Ling-flash-2.0** demonstrates stronger complex reasoning power. Moreover, it shows high competitiveness on **creative tasks** (Creative Writing v3).
+
  <p align="center">
  <img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/zxAvQ7QtrAwAAAAAQqAAAAgADkZ7AQFr/fmt.webp"/>
  <p>
 
  <img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/fMdiQZqYKSAAAAAAVdAAAAgADkZ7AQFr/fmt.avif"/>
  <p>

+ Guided by [Ling Scaling Laws](https://arxiv.org/abs/2507.17702), Ling 2.0 adopts a **1/32 activation-ratio MoE architecture**, optimized across multiple design choices: expert granularity, shared-expert ratio, attention balance, **aux-loss-free + sigmoid routing strategy**, MTP layers, QK-Norm, Partial-RoPE, and more. These refinements enable **small-activation MoE** models to achieve **7× efficiency gains** over equivalent dense architectures.
+ In other words, with just **6.1B activated parameters (4.8B non-embedding)**, **Ling-flash-2.0** can match the performance of ~40B dense models. Thanks to its small activation size, it also delivers major inference speed advantages:

+ - On **H20 hardware**, Ling-flash-2.0 achieves **200+ tokens/s**, offering **3× speedups** compared to 36B dense models in everyday use.
+ - With **YaRN extrapolation**, it supports **128K context length**, and as output length grows, its relative speedup can reach **7× or more**.

  <p align="center">
  <img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/oR9UTY7S0QgAAAAAgKAAAAgADkZ7AQFr/original"/>
 
  <img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/Hid1RrgsCUAAAAAAQYAAAAgADkZ7AQFr/fmt.webp"/>
  <p>

  ## Model Downloads

  The table below lists the available Ling-flash-2.0 models at each stage. If you are located in mainland China, we also provide the models on ModelScope.cn to speed up the download process.

  <center>

+ | **Model** | **Context Length** | **Download** |
+ | :-----------------: | :----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------: |
+ | Ling-flash-base-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-flash-base-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-flash-base-2.0) |
+ | Ling-flash-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-flash-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-flash-2.0) |

  </center>

  Note: If you are interested in previous versions, please visit the past model collections on [Hugging Face](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
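As a convenience, here is one way to fetch the weights programmatically (a minimal sketch using the `huggingface_hub` client; the `local_dir` path is only an illustration, and you can swap in `Ling-flash-base-2.0` for the base model):

```python
from huggingface_hub import snapshot_download

# Download the chat model to a local directory (illustrative path).
snapshot_download(
    repo_id="inclusionAI/Ling-flash-2.0",
    local_dir="./Ling-flash-2.0",
)
```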

  ## Quickstart

+ ### 🚀 Try Online
+
+ You can experience Ling-flash-2.0 online at: [ZenMux](https://zenmux.ai/inclusionai/ling-flash-2.0)
+
+ ### 🔌 API Usage
+
+ You can also use Ling-flash-2.0 through API calls:
+
+ ```python
+ from openai import OpenAI
+
+ # 1. Initialize the OpenAI client
+ client = OpenAI(
+     # 2. Point the base URL to the ZenMux endpoint
+     base_url="https://zenmux.ai/api/v1",
+     # 3. Replace with the API Key from your ZenMux user console
+     api_key="<your ZENMUX_API_KEY>",
+ )
+
+ # 4. Make a request
+ completion = client.chat.completions.create(
+     # 5. Specify the model to use in the format "provider/model-name"
+     model="inclusionai/ling-flash-2.0",
+     messages=[
+         {
+             "role": "user",
+             "content": "What is the meaning of life?"
+         }
+     ]
+ )
+
+ print(completion.choices[0].message.content)
+ ```
+
  ### 🤗 Hugging Face Transformers

  Here is a code snippet to show you how to use the chat model with `transformers`:
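The snippet itself falls outside this diff's hunks. As a rough sketch of typical `transformers` chat usage (assumptions: the standard `AutoTokenizer`/`AutoModelForCausalLM` API with `trust_remote_code=True` and the tokenizer's chat template; the README's actual snippet may differ):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ling-flash-2.0"

# Load tokenizer and model (trust_remote_code assumed for the custom MoE architecture).
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)

# Build a chat prompt with the tokenizer's chat template and generate a reply.
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```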
 
  ```

  To handle long context in vLLM using YaRN, we need to follow these two steps:
+
  1. Add a `rope_scaling` field to the model's `config.json` file, for example:
+
  ```json
  {
  ...,

  }
  }
  ```
+
  2. Use an additional parameter `--max-model-len` to specify the desired maximum context length when starting the vLLM service.

  For detailed guidance, please refer to the vLLM [`instructions`](https://docs.vllm.ai/en/latest/).
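The exact `rope_scaling` values are not shown in this diff (the JSON body falls between hunks). As an illustration only, with assumed YaRN settings (a factor of 4.0 stretches the native 32K window to roughly 128K; these are not necessarily the model's official values), the edit can be scripted like this:

```python
import json

config_path = "Ling-flash-2.0/config.json"  # hypothetical local checkout

with open(config_path) as f:
    config = json.load(f)

# Assumed YaRN settings: factor 4.0 takes a 32K native window to ~128K.
config["rope_scaling"] = {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)

# Then launch vLLM with the enlarged window, e.g.:
#   vllm serve ./Ling-flash-2.0 --max-model-len 131072
```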

  ### SGLang

  #### Environment Preparation

  We will submit our model to the official SGLang release later. For now, prepare the environment with the following steps:
+
  ```shell
  pip3 install sglang==0.5.2rc0 sgl-kernel==0.3.7.post1
  ```
+
  You can use the Docker image as well:
+
  ```shell
  docker pull lmsysorg/sglang:v0.5.2rc0-cu126
  ```
+
  Then apply the patch to the SGLang installation:
+
  ```shell
  # patch command is needed, run `yum install -y patch` if needed
  patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__file__))'` -p3 < inference/sglang/bailing_moe_v2.patch
 

  #### Run Inference

+ BF16 and FP8 models are supported by SGLang now; which is used depends on the dtype of the model in ${MODEL_PATH}. Both share the same command:

  - Start server:
+
  ```shell
  python -m sglang.launch_server \
  --model-path $MODEL_PATH \

  --trust-remote-code \
  --attention-backend fa3
  ```
+
  MTP is supported for the base model, but not yet for the chat model. You can add the parameter `--speculative-algorithm NEXTN`
  to the start command.
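For example, when serving the base model, the start command above would simply gain that flag (an illustrative sketch; the other flags mirror the command shown, and any options elided by this diff are omitted):

```shell
python -m sglang.launch_server \
    --model-path $MODEL_PATH \
    --trust-remote-code \
    --attention-backend fa3 \
    --speculative-algorithm NEXTN
```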


  More usage examples can be found [here](https://docs.sglang.ai/basic_usage/send_request.html).
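For instance, the same chat completion shown in the `curl` example can be sent from Python (a sketch assuming the server above is running locally with SGLang's OpenAI-compatible endpoint; the port and model name are placeholders to match whatever you launched):

```python
import requests

PORT = 30000  # placeholder: whatever --port the server was started with

resp = requests.post(
    f"http://localhost:{PORT}/v1/chat/completions",
    json={
        "model": "inclusionAI/Ling-flash-2.0",  # placeholder: match the served model
        "messages": [{"role": "user", "content": "Hello! Who are you?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```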

  ### Finetuning

  We recommend using [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) to [finetune Ling](https://github.com/inclusionAI/Ling-V2/blob/main/docs/llamafactory_finetuning.md).

  ## License

  This code repository is licensed under [the MIT License](https://github.com/inclusionAI/Ling-V2/blob/master/LICENCE).