Utkarsh524 committed
Commit 3f5c3b2 · verified · 1 Parent(s): 975dad4

Update README.md

Files changed (1)
  1. README.md +51 -14
README.md CHANGED
@@ -1,33 +1,32 @@
  ---
  license: apache-2.0
  language: c++
  tags:
- - code-generation
- - codellama
- - peft
- - unit-tests
- - causal-lm
- - text-generation
  base_model: codellama/CodeLlama-7b-hf
  model_type: llama
  pipeline_tag: text-generation
  ---
-
  # 🧪 CodeLLaMA Unit Test Generator — Full Merged Model (v2)

- This is a **merged model** that combines [`codellama/CodeLlama-7b-hf`](https://huggingface.co/codellama/CodeLlama-7b-hf) with a LoRA adapter fine-tuned on embedded C/C++ code and high-quality unit tests using GoogleTest and CppUTest. This version includes enhanced formatting, stop tokens, and test cleanup mechanisms.

- > ✅ Trained to generate only test cases, no headers, no `main()`, and uses `// END_OF_TESTS` token to denote completion.

  ---

  ## 🎯 Use Cases

- - 🧪 Generate comprehensive unit tests for embedded C/C++ functions
- -Focus on edge cases, boundaries, error handling
- - ⚠️ Ensure MISRA-C compliance (if trained accordingly)
- - 📏 Automatically remove boilerplate and focus on `TEST(...)` blocks
-
  ---

  ## 🧠 Training Summary
@@ -69,3 +68,41 @@ int add(int a, int b) { return a + b; }
  inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
  outputs = model.generate(**inputs, max_new_tokens=512, eos_token_id=tokenizer.convert_tokens_to_ids("// END_OF_TESTS"))
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ---
+
  license: apache-2.0
  language: c++
  tags:
+ - code-generation
+ - codellama
+ - peft
+ - unit-tests
+ - causal-lm
+ - text-generation
+ - embedded-systems
  base_model: codellama/CodeLlama-7b-hf
  model_type: llama
  pipeline_tag: text-generation
  ---
  # 🧪 CodeLLaMA Unit Test Generator — Full Merged Model (v2)

+ This is a **merged model** that combines [`codellama/CodeLlama-7b-hf`](https://huggingface.co/codellama/CodeLlama-7b-hf) with a LoRA adapter
+ fine-tuned on embedded C/C++ code and high-quality unit tests using GoogleTest and CppUTest. This version includes enhanced formatting, stop tokens,
+ and test cleanup mechanisms.

  ---

  ## 🎯 Use Cases

+ - Generate comprehensive unit tests for embedded C/C++ functions
+ - Focus on edge cases, boundaries, error handling
  ---

  ## 🧠 Training Summary
 
  inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
  outputs = model.generate(**inputs, max_new_tokens=512, eos_token_id=tokenizer.convert_tokens_to_ids("// END_OF_TESTS"))
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+
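The quick-start lines above assume `model`, `tokenizer`, and `prompt` are defined earlier in the README, outside this hunk. For reference, here is a hedged, self-contained sketch; the Hub repo id (taken from the Feedback section below), the chat-style prompt layout built from the added `<|system|>`/`<|user|>`/`<|assistant|>` tokens, and the post-generation trim at `// END_OF_TESTS` are assumptions, not part of this commit.

```python
# Hedged end-to-end sketch (assumed repo id and prompt template; adjust for your setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Utkarsh524/codellama_utests_full_new_ver2"  # assumed Hub id (see Feedback section)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Hypothetical prompt layout using the special tokens listed in the training table below.
prompt = (
    "<|system|>Generate GoogleTest unit tests for the given C/C++ function.\n"
    "<|user|>int add(int a, int b) { return a + b; }\n"
    "<|assistant|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    eos_token_id=tokenizer.convert_tokens_to_ids("// END_OF_TESTS"),
)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)

# Drop anything after the completion marker (mirrors the README's "test cleanup" idea).
print(text.split("// END_OF_TESTS")[0].strip())
```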
+ ## Training & Optimization Details
+
+ | Step | Description |
+ |---------------------|-----------------------------------------------------------------------------|
+ | **Dataset** | athrv/Embedded_Unittest2 (filtered for valid code-test pairs) |
+ | **Preprocessing** | Token length filtering (≤4096), special token injection |
+ | **Quantization** | 8-bit (BitsAndBytesConfig), llm_int8_threshold=6.0 |
+ | **LoRA Config** | r=64, alpha=32, dropout=0.1 on q_proj/v_proj/k_proj/o_proj |
+ | **Training** | 4 epochs, batch=4 (effective 8), lr=2e-4, FP16 |
+ | **Optimization** | Paged AdamW 8-bit, gradient checkpointing, custom data collator |
+ | **Special Tokens** | Added `<|system|>`, `<|user|>`, `<|assistant|>` |
+
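For readers who want to reproduce the adapter setup, the table maps onto roughly the following configuration. This is a minimal sketch based only on the rows above; anything not listed there (task_type, the k-bit preparation step, trainer wiring) is an assumption.

```python
# Hedged reconstruction of the setup described in the table; values not listed
# there are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "codellama/CodeLlama-7b-hf"

# 8-bit quantization with the outlier threshold from the table.
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

tokenizer = AutoTokenizer.from_pretrained(base_id)
# Inject the chat-style special tokens listed in the table.
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<|system|>", "<|user|>", "<|assistant|>"]}
)

model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model.resize_token_embeddings(len(tokenizer))   # account for the added tokens
model = prepare_model_for_kbit_training(model)  # standard k-bit prep (assumption)

# LoRA adapter as described: r=64, alpha=32, dropout=0.1 on the attention projections.
lora_config = LoraConfig(
    r=64,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```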
+ ---
+
+ ## Tips for Best Results
+
+ - **Temperature:** 0.2–0.4
+ - **Top-p:** 0.85–0.95
+ - **Max New Tokens:** 256–512 for typical functions, up to 1024–2048 for longer ones (see the sampling sketch below)
+ - **Input Formatting:**
+   - Include complete function signatures
+   - Remove unnecessary comments
+   - Keep functions under 200 lines
+   - For long functions, split into logical units
+
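A minimal sampling call wired to the ranges above, reusing the `model`, `tokenizer`, and `inputs` from the quick-start sketch; the specific values (0.3 / 0.9 / 512) are just one point inside the suggested ranges, not prescribed by this commit.

```python
# Sampling settings drawn from the tips above; tune within the suggested ranges.
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.3,   # suggested 0.2–0.4
    top_p=0.9,         # suggested 0.85–0.95
    max_new_tokens=512,
    eos_token_id=tokenizer.convert_tokens_to_ids("// END_OF_TESTS"),
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```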
+ ---
+
+ ## Feedback & Citation
+
+ **Dataset Credit:** `athrv/Embedded_Unittest2`
+ **Report Issues:** [Model's Hugging Face page](https://huggingface.co/Utkarsh524/codellama_utests_full_new_ver2)
+
+ **Maintainer:** Utkarsh524
+ **Model Version:** v2 (4-epoch trained)
+
+ ---