# 🧠 AgriQA TinyLlama LoRA Adapter
This repository contains a [LoRA](https://arxiv.org/abs/2106.09685) adapter fine-tuned on the [AgriQA](https://huggingface.co/datasets/shchoi83/agriQA) dataset using the [TinyLlama/TinyLlama-1.1B-Chat](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat) base model.
---
## 🔧 Model Details
- **Base Model**: [`TinyLlama/TinyLlama-1.1B-Chat`](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat)
- **Adapter Type**: LoRA (Low-Rank Adaptation); an illustrative training config sketch follows this list
- **Adapter Size**: ~4.5 MB
- **Dataset**: [`shchoi83/agriQA`](https://huggingface.co/datasets/shchoi83/agriQA)
- **Language**: English
- **Task**: Instruction-tuned question answering in the agriculture domain
- **Trained by**: [@theone049](https://huggingface.co/theone049)
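
The exact LoRA hyperparameters used to train this adapter are not recorded in this card. The sketch below shows how such an adapter is typically configured with `peft`; the rank, alpha, dropout, and `target_modules` values are illustrative assumptions, not the trained settings:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat")

# Illustrative values only -- the actual rank/alpha/target modules
# used for this adapter are not stated in this card.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the low-rank matrices are trainable
```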
---
## 📌 Usage
To use this adapter, load it on top of the base model with the `peft` library (requires both `transformers` and `peft` to be installed):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "theone049/agriqa-tinyllama-lora-adapter")
model.to(device)
model.eval()

# Run inference
prompt = """### Instruction:
Answer the agricultural question.
### Input:
What is the ideal pH range for growing rice?
### Response:"""
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
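
Optionally, the adapter can be merged into the base weights so the result loads as a plain `transformers` model with no `peft` dependency at inference time. A minimal sketch; the output directory name is an arbitrary example, not an official artifact:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat")
model = PeftModel.from_pretrained(base, "theone049/agriqa-tinyllama-lora-adapter")

# Fold the LoRA deltas into the base weights and save a standalone checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("agriqa-tinyllama-merged")  # example output path

tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat")
tokenizer.save_pretrained("agriqa-tinyllama-merged")
```

The merged directory can then be loaded directly with `AutoModelForCausalLM.from_pretrained`.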