---
license: apache-2.0
task_categories:
- text-generation
tags:
- reasoning
- math
- code
- reinforcement-learning
---
# Klear-Reasoner Code RL Dataset
This dataset is a cleaned version of the RL data from the rllm project; a portion of it was used to train the Klear-Reasoner code RL model. The data accompanies the paper *Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization*.
For more details on the Klear-Reasoner project, including the model and training procedures, please refer to the official GitHub repository: https://github.com/suu990901/KlearReasoner
## Dataset Structure
Each record in this repository follows the format below, used to train RL models for code-generation tasks. An example code entry:
{"hash": "47c43857280be8a7557cc36b998b3012", "ability": "code", "data_source": "coder1_longcot", "prompt": [{"content": "You are an expert Python programmer. You will be given a question (problem specification) and will generate a correct Python program that matches the specification and passes all tests.
Takahashi is planning to eat N dishes.
The i-th dish he plans to eat is sweet if S_i = sweet, and salty if S_i = salty.
If he eats two sweet dishes consecutively, he will feel sick and be unable to eat any more dishes.
Determine whether he can eat all the dishes...", "role": "user"}], "reward_model": {"ground_truth": "...", "style": "rule"}}
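For quick inspection, the records can be loaded with the `datasets` library. This is a minimal sketch, assuming the data is stored as JSON Lines on the Hub; the repository id below is a placeholder, not a confirmed path:

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual dataset path on the Hub.
ds = load_dataset("org/klear-reasoner-code-rl", split="train")

# Inspect one record: the data source, the user prompt, and the reward-model metadata.
example = ds[0]
print(example["data_source"])                 # e.g. "coder1_longcot"
print(example["prompt"][0]["content"][:200])  # truncated problem statement
print(example["reward_model"]["style"])       # e.g. "rule"
```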
Here, the `data_source` field is set to `"coder1_longcot"`. This field determines which verifier is used during training.
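Because `data_source` selects the verifier, a training pipeline can dispatch on it. The sketch below is a hypothetical illustration of that dispatch; the registry, function names, and reward signature are assumptions, not the project's actual API:

```python
# Hypothetical verifier dispatch keyed on the data_source field.
# Names and signatures are illustrative, not the project's actual API.
def run_code_tests(response: str, ground_truth: str) -> float:
    """Stub: execute the generated program against ground-truth tests, return a reward."""
    raise NotImplementedError

VERIFIERS = {
    "coder1_longcot": run_code_tests,  # code entries are scored by executing tests
}

def verify(example: dict, response: str) -> float:
    verifier = VERIFIERS[example["data_source"]]
    return verifier(response, example["reward_model"]["ground_truth"])
```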