azzzacs and nielsr (HF Staff) committed
Commit 3e0281a · verified · 1 Parent(s): 6debec4

Improve model card: Add pipeline tag, library name, and GitHub link (#1)


- Improve model card: Add pipeline tag, library name, and GitHub link (68caf65015ed76e04f61affb41f603b5f7aa44ec)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +12 -5
README.md CHANGED
@@ -1,11 +1,13 @@
 ---
-license: mit
-datasets:
-- open-r1/codeforces-cots
 base_model:
 - deepseek-ai/DeepSeek-R1-Distill-Llama-8B
+datasets:
+- open-r1/codeforces-cots
+license: mit
 tags:
 - code
+pipeline_tag: text-generation
+library_name: transformers
 ---

 # Paper Page
@@ -18,6 +20,9 @@ tags:

 This model was fine-tuned on pruned CoTs examples derived via our **ASAP** method(**A**nchor-guided, **S**urpris**a**l-polished **P**runing), focusing on highly compressed yet semantically informative reasoning traces.

+## Code
+For the official implementation, please refer to the [ASAP GitHub repository](https://github.com/azzzacs/ASAP).
+
 # 🧠 Reasoning Mode

 We recommend **explicitly activating reasoning mode by inserting ```<think>``` in the prompt**.
@@ -30,8 +35,10 @@ from transformers import AutoTokenizer, AutoModelForCausalLM
 tokenizer = AutoTokenizer.from_pretrained("azzzacs/LogicCoder-8B", trust_remote_code=True)
 model = AutoModelForCausalLM.from_pretrained("azzzacs/LogicCoder-8B", device_map="auto", trust_remote_code=True).eval()

-message = [{"role": "user", "content": "Please write a Python quick sort algorithm.\n"}]
-prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False) + "<|Assistant|><think>\n"
+message = [{"role": "user", "content": "Please write a Python quick sort algorithm.
+"}]
+prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False) + "<|Assistant|><think>
+"

 model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
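
Note that the `+` lines of the last hunk split the `\n` string escapes into literal line breaks, which is not valid Python as written. Below is a runnable version of the README example with the escapes restored; the `generate()` call and its parameters are assumptions for illustration, since the visible diff ends at `model_inputs`.

# Runnable sketch of the README snippet, with "\n" escapes restored.
# The generation settings below are assumed; they are not part of the diff.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("azzzacs/LogicCoder-8B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "azzzacs/LogicCoder-8B", device_map="auto", trust_remote_code=True
).eval()

# Build the chat prompt, then explicitly open a <think> block so the model
# starts its reply in reasoning mode, as the model card recommends.
message = [{"role": "user", "content": "Please write a Python quick sort algorithm.\n"}]
prompt = tokenizer.apply_chat_template(
    message, add_generation_prompt=True, tokenize=False
) + "<|Assistant|><think>\n"

model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

# Assumed generation step (not shown in the diff).
output_ids = model.generate(**model_inputs, max_new_tokens=1024)
print(tokenizer.decode(output_ids[0][model_inputs["input_ids"].shape[1]:], skip_special_tokens=True))

Appending `<|Assistant|><think>\n` after the rendered chat template forces generation to begin inside a reasoning block, which is the point of the Reasoning Mode section.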
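For reference, the metadata keys this commit adds (`pipeline_tag: text-generation`, `library_name: transformers`) can also be set programmatically. A minimal sketch using the `huggingface_hub` client follows; it is illustrative only, not how this commit was produced, and assumes a write token is configured.

# Minimal sketch (assumption: huggingface_hub installed, write token configured).
# metadata_update edits the YAML front matter of the model card in one commit;
# the keys below mirror the ones added by this PR.
from huggingface_hub import metadata_update

metadata_update(
    repo_id="azzzacs/LogicCoder-8B",
    metadata={"pipeline_tag": "text-generation", "library_name": "transformers"},
    commit_message="Add pipeline tag and library name",
)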