Improve dataset card: Add paper link, task category, tags, code link, and data format
#2
opened by nielsr (HF Staff)

README.md CHANGED
@@ -1,4 +1,30 @@
 ---
 license: apache-2.0
+task_categories:
+- text-generation
+tags:
+- reasoning
+- math
+- code
+- reinforcement-learning
 ---
-
+
+# Klear-Reasoner Code RL Dataset
+
+This dataset is a cleaned version of the RL data from the [rllm project](https://github.com/agentica-project/rllm), part of which was used to train the Klear-Reasoner code RL model. It accompanies the paper [Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization](https://huggingface.co/papers/2508.07629).
+
+For more details on the Klear-Reasoner project, including the model and training procedures, please refer to the official GitHub repository: [https://github.com/suu990901/KlearReasoner](https://github.com/suu990901/KlearReasoner)
+
+## Dataset Structure
+
+The data in this repository follows a specific format for training RL models on code generation tasks. An example of a single code entry:
+
+```json
+{"hash": "47c43857280be8a7557cc36b998b3012", "ability": "code", "data_source": "coder1_longcot", "prompt": [{"content": "You are an expert Python programmer. You will be given a question (problem specification) and will generate a correct Python program that matches the specification and passes all tests.
+
+Takahashi is planning to eat N dishes.
+The i-th dish he plans to eat is sweet if S_i = sweet, and salty if S_i = salty.
+If he eats two sweet dishes consecutively, he will feel sick and be unable to eat any more dishes.
+Determine whether he can eat all the dishes...", "role": "user"}], "reward_model": {"ground_truth": "...", "style": "rule"}}
+```
+Here, the `data_source` field is set to `"coder1_longcot"`; this field determines which verifier is used during training.