---
base_model: AlignmentResearch/Llama-3.3-Tiny-Instruct
---
# Random LoRA Adapter for Llama-3.3-Tiny-Instruct
This is a randomly initialized LoRA adapter for the `AlignmentResearch/Llama-3.3-Tiny-Instruct` model.
## Details

- **Base model**: AlignmentResearch/Llama-3.3-Tiny-Instruct
- **Seed**: 0
- **LoRA rank**: 16
- **LoRA alpha**: 32
- **Target modules**: q_proj, v_proj, k_proj, o_proj
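
The exact creation script is not published here, but a minimal sketch of how an adapter with these settings could be produced with `peft` is shown below. The use of `init_lora_weights=False` (which makes both LoRA matrices random rather than zero-initializing `lora_B`) and the local save path are assumptions, chosen to match the random-weights nature of this adapter:

```python
# Hypothetical reconstruction of the adapter creation; not the exact
# script used for this repo.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

torch.manual_seed(0)  # Seed: 0, from the details above

base = AutoModelForCausalLM.from_pretrained(
    "AlignmentResearch/Llama-3.3-Tiny-Instruct"
)
config = LoraConfig(
    r=16,                     # LoRA rank
    lora_alpha=32,            # LoRA alpha
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],
    init_lora_weights=False,  # assumption: fully random init, no zeroed lora_B
)
model = get_peft_model(base, config)
model.save_pretrained("Llama-3.3-Tiny-Instruct-lora-0")  # hypothetical local path
```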
## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("AlignmentResearch/Llama-3.3-Tiny-Instruct")
tokenizer = AutoTokenizer.from_pretrained("AlignmentResearch/Llama-3.3-Tiny-Instruct")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "AlignmentResearch/Llama-3.3-Tiny-Instruct-lora-0")
```
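
Once loaded, the combined model can be used like any `transformers` causal LM. A quick smoke test (the output will be meaningless, since the adapter weights are random):

```python
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```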
This adapter was created for testing purposes and contains random, untrained weights; it is not intended for real inference.
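
To sanity-check which modules received LoRA weights and confirm the rank, you can list the adapter parameters:

```python
# Each targeted projection gains a lora_A (r x in_features) and a
# lora_B (out_features x r) matrix; with r=16 the shapes should reflect that.
for name, param in model.named_parameters():
    if "lora_" in name:
        print(name, tuple(param.shape))
```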