- code
---

# Paper Page

[**Pruning the Unsurprising: Efficient Code Reasoning via First-Token Surprisal.**](https://arxiv.org/abs/2508.05988)

# LogicCoder-8B

**LogicCoder-8B** is an 8B-parameter language model fine-tuned for code generation tasks. It is based on the DeepSeek-R1-Distill-Llama-8B model and trained on a Python subset of the open-r1/codeforces-cots dataset.

This model was fine-tuned on pruned CoT examples derived via our **ASAP** method (**A**nchor-guided, **S**urpris**a**l-polished **P**runing), focusing on highly compressed yet semantically informative reasoning traces.
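The surprisal-based pruning idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the step texts, first-token probabilities, threshold, and anchor set below are all hypothetical stand-ins. In ASAP, first-token surprisal would come from a language model's predicted distribution over each reasoning step's opening token, and anchors would be identified by the anchor-guided stage of the method.

```python
import math

def first_token_surprisal(p_first_token: float) -> float:
    # Surprisal of a step's first token: -log2(p). A low value means the
    # step's opening was highly predictable from context, i.e. unsurprising.
    return -math.log2(p_first_token)

def prune_cot(steps, p_first, threshold=2.0, anchors=()):
    # Keep a step if it is an anchor (hypothetical anchor indices) or its
    # first-token surprisal exceeds the threshold; drop predictable steps.
    kept = []
    for i, (step, p) in enumerate(zip(steps, p_first)):
        if i in anchors or first_token_surprisal(p) > threshold:
            kept.append(step)
    return kept

# Hypothetical CoT steps with made-up first-token probabilities.
steps = [
    "Restate the problem.",            # very predictable opener
    "Note the input bound on n.",      # fairly predictable
    "Use a prefix-sum array.",         # surprising, informative step
    "Therefore the answer follows.",   # kept as an anchor
]
p_first = [0.9, 0.6, 0.05, 0.5]
print(prune_cot(steps, p_first, threshold=2.0, anchors={3}))
# → ['Use a prefix-sum array.', 'Therefore the answer follows.']
```

The intuition matches the paragraph above: steps whose openings a model already predicts with high probability carry little new information, so dropping them compresses the trace while the surprising, semantically informative steps survive.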