---
license: mit
datasets:
- tuanha1305/DeepSeek-R1-Distill
language:
- en
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
---
# Qwen2.5-1.5B-Instruct-Lora-Deepseek-R1
This model is a LoRA (Low-Rank Adaptation) fine-tune of **Qwen2.5-1.5B-Instruct**, trained on the **DeepSeek-R1-Distill** dataset. LoRA was applied to the query (`q`), key (`k`), and value (`v`) projection matrices.
**Base Model:** [Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
**Dataset:** [tuanha1305/DeepSeek-R1-Distill](https://huggingface.co/datasets/tuanha1305/DeepSeek-R1-Distill)
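For reference, a LoRA setup targeting the q/k/v projections can be expressed with the `peft` library as below. This is a minimal sketch, not the exact configuration used for this checkpoint: the rank, alpha, and dropout values are assumptions, while the target module names follow Qwen2.5's attention projection layers (`q_proj`, `k_proj`, `v_proj`).

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model the adapter was trained from
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")

# LoRA on the query/key/value projections, as described above.
# Rank, alpha, and dropout are illustrative assumptions, not the
# values used for this checkpoint.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights train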
---
## Training Details
- **Hardware:** 1 × NVIDIA A100 GPU (80GB HBM)
- **Training Time:** ~7 hours and 17 minutes
- **Total Steps:** 9000
- **Fine-tuning Method:** LoRA (q, k, v)
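A training run matching these details could look roughly like the sketch below, building on the LoRA setup shown earlier. Only the step count (9,000), the dataset, and the LoRA targets come from this card; the batch size, learning rate, and the dataset's `"text"` column name are assumptions.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_id = "Qwen/Qwen2.5-1.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained(base_id),
    LoraConfig(r=16, lora_alpha=32,  # rank/alpha are assumptions
               target_modules=["q_proj", "k_proj", "v_proj"],
               task_type="CAUSAL_LM"),
)

dataset = load_dataset("tuanha1305/DeepSeek-R1-Distill", split="train")

def tokenize(example):
    # The "text" column name is an assumption about the dataset schema.
    return tokenizer(example["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

# Only max_steps=9000 comes from the card; the remaining
# hyperparameters are illustrative assumptions.
trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="qwen2.5-1.5b-lora-r1-distill",
        max_steps=9000,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```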
---
## Evaluation on MATH-500 Benchmark
Following the sampling-based Pass@1 methodology of [DeepSeek-R1](https://arxiv.org/abs/2501.12948), we evaluated with the following settings:
| Setting | Value |
|------------------|---------|
| **dataset** | `HuggingFaceH4/MATH-500` |
| **temperature** | `0.6` |
| **top_p** | `0.95` |
| **max_new_tokens** | `2048` |
| **num_samples** | `16` per question |
**Pass@1:** **54.60%** (273 out of 500 questions)
*Here, Pass@1 denotes the percentage of questions for which at least one of the 16 sampled solutions was correct.*
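A minimal sketch of this at-least-one-correct evaluation is shown below. The sampling parameters match the table above; the `is_correct` answer checker and the `(prompt, answer)` representation of MATH-500 items are assumptions supplied by the caller.

```python
import torch

def pass_at_1(model, tokenizer, problems, is_correct, num_samples=16):
    """Fraction of problems solved by at least one of num_samples generations.

    `problems` is a list of (prompt, reference_answer) pairs and
    `is_correct` is a caller-supplied answer checker -- both are
    assumptions about how the benchmark items are represented here.
    """
    solved = 0
    for prompt, answer in problems:
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        with torch.no_grad():
            outputs = model.generate(
                **inputs,
                do_sample=True,
                temperature=0.6,        # from the table above
                top_p=0.95,             # from the table above
                max_new_tokens=2048,    # from the table above
                num_return_sequences=num_samples,
            )
        texts = tokenizer.batch_decode(outputs, skip_special_tokens=True)
        if any(is_correct(t, answer) for t in texts):
            solved += 1
    return solved / len(problems)
```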
---
## How to Use
The following script loads the model and generates a response:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "PursuitOfDataScience/Qwen2.5-1.5B-Instruct-Lora-Deepseek-R1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

test_prompt = "Instruction: Explain how machine learning works\nResponse:"
inputs = tokenizer(test_prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,  # passes input_ids and attention_mask
        max_new_tokens=200,
        temperature=0.7,
        top_p=0.95,
        do_sample=True,
    )

generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"\nGenerated response:\n{generated_text}")
```
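If this repository ships LoRA adapter weights rather than a merged checkpoint (an assumption; the script above implies merged weights), the adapter can instead be attached to the base model with `peft`:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-1.5B-Instruct", device_map="auto"
)
# Attach the adapter weights on top of the base model
model = PeftModel.from_pretrained(
    base, "PursuitOfDataScience/Qwen2.5-1.5B-Instruct-Lora-Deepseek-R1"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
```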
### Sample Output
```
Instruction: Explain how machine learning works
Response: Machine learning is a subset of artificial intelligence that allows computers to learn from data without being explicitly programmed. It involves using algorithms and statistical models to analyze patterns, trends, or relationships in large sets of data and then making predictions or decisions based on these insights.
Here's an overview of the key steps involved in implementing a machine learning model:
1. Data collection: Gather historical data relevant to your problem domain.
2. Data preprocessing: Cleanse, normalize, and transform raw data into a format suitable for analysis.
3. Feature selection: Identify important features (variables) that can help predict outcomes.
4. Model training: Train various machine learning algorithms on subsets of labeled data.
5. Model evaluation: Assess performance metrics like accuracy, precision, recall, etc., using test datasets.
6. Model tuning: Optimize hyperparameters and tweak algorithm settings to improve predictive power.
7. Deployment: Implement trained models in production systems for real-time predictions.
```