nielsr (HF Staff) committed · verified
Commit 68caf65 · 1 Parent(s): 6debec4

Improve model card: Add pipeline tag, library name, and GitHub link


This PR enhances the model card by:
- Adding the `pipeline_tag: text-generation` to ensure better discoverability on the Hub (https://huggingface.co/models?pipeline_tag=text-generation).
- Specifying the `library_name: transformers` to indicate compatibility with the Hugging Face Transformers library.
- Including a direct link to the GitHub repository (https://github.com/azzzacs/ASAP) for easy access to the code.
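For reference, the model card's front matter after these changes (as shown in the diff below) becomes:

```yaml
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
datasets:
- open-r1/codeforces-cots
license: mit
tags:
- code
pipeline_tag: text-generation
library_name: transformers
```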

Please review and merge this PR.

Files changed (1): README.md (+12 −5)
README.md CHANGED

````diff
@@ -1,11 +1,13 @@
 ---
-license: mit
-datasets:
-- open-r1/codeforces-cots
 base_model:
 - deepseek-ai/DeepSeek-R1-Distill-Llama-8B
+datasets:
+- open-r1/codeforces-cots
+license: mit
 tags:
 - code
+pipeline_tag: text-generation
+library_name: transformers
 ---
 
 # Paper Page
@@ -18,6 +20,9 @@ tags:
 
 This model was fine-tuned on pruned CoTs examples derived via our **ASAP** method(**A**nchor-guided, **S**urpris**a**l-polished **P**runing), focusing on highly compressed yet semantically informative reasoning traces.
 
+## Code
+For the official implementation, please refer to the [ASAP GitHub repository](https://github.com/azzzacs/ASAP).
+
 # 🧠 Reasoning Mode
 
 We recommend **explicitly activating reasoning mode by inserting ```<think>``` in the prompt**.
@@ -30,8 +35,10 @@ from transformers import AutoTokenizer, AutoModelForCausalLM
 tokenizer = AutoTokenizer.from_pretrained("azzzacs/LogicCoder-8B", trust_remote_code=True)
 model = AutoModelForCausalLM.from_pretrained("azzzacs/LogicCoder-8B", device_map="auto", trust_remote_code=True).eval()
 
-message = [{"role": "user", "content": "Please write a Python quick sort algorithm.\n"}]
-prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False) + "<|Assistant|><think>\n"
+message = [{"role": "user", "content": "Please write a Python quick sort algorithm.
+"}]
+prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False) + "<|Assistant|><think>
+"
 
 model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
````
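Note that the committed version splits the `\n` escape sequences in the usage snippet into literal line breaks, which leaves the two string literals invalid Python. For reference, here is a minimal runnable sketch of the README's usage example with the escapes restored; the generation call and its settings (`max_new_tokens`) are illustrative assumptions and do not appear in the diff.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model named in the model card.
tokenizer = AutoTokenizer.from_pretrained("azzzacs/LogicCoder-8B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "azzzacs/LogicCoder-8B", device_map="auto", trust_remote_code=True
).eval()

# Build the chat prompt and explicitly activate reasoning mode by
# appending <think>, as the model card recommends.
message = [{"role": "user", "content": "Please write a Python quick sort algorithm.\n"}]
prompt = tokenizer.apply_chat_template(
    message, add_generation_prompt=True, tokenize=False
) + "<|Assistant|><think>\n"

model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

# Generation settings below are illustrative assumptions, not from the card.
with torch.no_grad():
    output_ids = model.generate(**model_inputs, max_new_tokens=1024)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(
    output_ids[0][model_inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
))
```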