---
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
tags:
- code
---

# LogicCoder-8B

**LogicCoder-8B** is an 8B-parameter language model fine-tuned for code generation tasks. It is based on the DeepSeek-R1-Distill-Llama-8B model and trained on a Python subset of the open-r1/codeforces-cots dataset.

This model was fine-tuned on pruned CoT examples derived via our **ASAP** method (**A**nchor-guided, **S**urpris**a**l-polished **P**runing), focusing on highly compressed yet semantically informative reasoning traces.

# 🧠 Reasoning Mode

We recommend **explicitly activating reasoning mode by inserting `<think>` in the prompt**, as done in the usage example below.

# 🔧 Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("azzzacs/LogicCoder-8B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("azzzacs/LogicCoder-8B", device_map="auto", trust_remote_code=True).eval()

# Build the chat prompt and append <think> to activate reasoning mode.
message = [{"role": "user", "content": "Please write a Python quick sort algorithm.\n"}]
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False) + "<|Assistant|><think>\n"

model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

# Greedy decoding; raise max_new_tokens if long reasoning traces get cut off.
outputs = model.generate(
    model_inputs.input_ids,
    max_new_tokens=4096,
    do_sample=False,
    eos_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens, keeping the <think> trace visible.
print(tokenizer.decode(outputs[0][len(model_inputs.input_ids[0]):], skip_special_tokens=False))
```
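
The model emits its reasoning between `<think>` and `</think>` before the final answer. If you only want the answer, a minimal post-processing sketch (assuming the model closes the tag) is to split on `</think>`:

```python
# Strip the reasoning trace: keep only the text after the closing </think> tag.
text = tokenizer.decode(outputs[0][len(model_inputs.input_ids[0]):], skip_special_tokens=True)
answer = text.split("</think>")[-1].strip()
print(answer)
```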
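
For higher-throughput inference, the checkpoint should also load in vLLM, since it is Llama-based. This is an untested sketch; the raw prompt tokens assume the DeepSeek-R1 chat format, so prefer `tokenizer.apply_chat_template` in practice:

```python
# A minimal vLLM sketch (assumes vLLM can serve this Llama-based checkpoint).
from vllm import LLM, SamplingParams

llm = LLM(model="azzzacs/LogicCoder-8B")
# Greedy decoding to match the transformers example above.
params = SamplingParams(temperature=0.0, max_tokens=4096)

# Raw prompt with <think> appended to activate reasoning mode
# (token layout assumes the DeepSeek-R1 chat template).
prompt = "<|User|>Please write a Python quick sort algorithm.\n<|Assistant|><think>\n"
print(llm.generate([prompt], params)[0].outputs[0].text)
```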