Update README.md
This model is [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) fine-tuned on a synthetic dataset of C++ → Python code translations from Codeforces.

📦 GitHub repo: [DemoVersion/cf-llm-finetune](https://github.com/DemoVersion/cf-llm-finetune)

📑 Dataset creation: [DATASET.md](https://github.com/DemoVersion/cf-llm-finetune/blob/main/DATASET.md)

📑 Training: [TRAIN.md](https://github.com/DemoVersion/cf-llm-finetune/blob/main/TRAIN.md)

📚 Dataset on Hugging Face: [demoversion/cf-cpp-to-python-code-generation](https://huggingface.co/datasets/demoversion/cf-cpp-to-python-code-generation)

For dataset generation, training, and inference, see the [GitHub repo](https://github.com/DemoVersion/cf-llm-finetune).

**📚 Main Medium article**: [Toward fine-tuning Llama 3.2 using PEFT for Code Generation](https://medium.com/@haddadhesam/towards-fine-tuning-llama-3-2-using-peft-for-code-generation-63e3991c26db)

**📚 Medium article on inference with the GGUF format**: [How to inference with GGUF format](https://haddadhesam.medium.com/one-file-to-rule-them-all-gguf-for-local-llm-testing-and-deployment-208b85934434)

## Model description

A lightweight LLaMA 3.2 model fine-tuned with LoRA adapters for competitive programming code translation, from ICPC-style C++ to Python.
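Since the base model is an instruction-tuned chat model, a translation request is naturally phrased as a chat conversation. Below is a minimal sketch of a prompt builder; the system and user wording here is an assumption for illustration, not the exact template used during fine-tuning (see TRAIN.md in the GitHub repo for that):

```python
# Hypothetical prompt builder for the C++ -> Python translation task.
# NOTE: the exact prompt template used in training is defined in the
# cf-llm-finetune repo; this wording is only an illustrative assumption.

def build_translation_messages(cpp_code: str) -> list[dict]:
    """Build a chat-format message list asking the model to translate C++ to Python."""
    system = "You are a code translator. Translate the given C++ solution to Python."
    user = f"Translate this C++ code to Python:\n```cpp\n{cpp_code}\n```"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_translation_messages(
    "#include <iostream>\nint main() { std::cout << 42; }"
)
```

A message list in this shape can then be fed to `tokenizer.apply_chat_template(...)` and `model.generate(...)` from the `transformers` library, with the LoRA adapter loaded via `peft`.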