---
base_model: stabilityai/stable-diffusion-2
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
# LoRA text2image fine-tuning - ButterChicken98/plantVillage-stableDiffusion-2-iter2_with_one_caption
These are LoRA adaptation weights for stabilityai/stable-diffusion-2, fine-tuned on the ButterChicken98/plantvillage-image-text-pairs dataset. You can find some example images below.
![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)
## Intended uses & limitations
#### How to use
The snippet below is a minimal sketch of how the adapter can be loaded with the standard diffusers LoRA API. The repo id and base model come from the training command further down; the prompt is just an illustrative example.
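```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model, then attach the LoRA adapter weights from this repository
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
)
pipe.load_lora_weights("ButterChicken98/plantVillage-stableDiffusion-2-iter2_with_one_caption")
pipe = pipe.to("cuda")

# Prompts follow the training captions, e.g. a PlantVillage class name
image = pipe("Tomato YellowLeaf Curl Virus", num_inference_steps=30).images[0]
image.save("tomato_yellow_leaf_curl_virus.png")
```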
#### Limitations and bias
The adapter was trained only on PlantVillage image-text pairs, so it is biased toward close-up leaf photographs of the crop and disease classes in that dataset and is unlikely to generalize to prompts outside that domain. The limitations and biases of the base stabilityai/stable-diffusion-2 model also apply.
## Training details
The adapter was fine-tuned with the `train_text_to_image_lora.py` script from the diffusers `examples/text_to_image` directory, launched from a Kaggle notebook with the following command:
```bash
accelerate launch /kaggle/working/diffusers/examples/text_to_image/train_text_to_image_lora.py \
--pretrained_model_name_or_path="stabilityai/stable-diffusion-2" \
--dataset_name="ButterChicken98/plantvillage-image-text-pairs" \
--dataloader_num_workers=8 \
--resolution=256 --random_flip \
--train_batch_size=16 \
--gradient_accumulation_steps=4 \
--max_train_steps=3000 \
--learning_rate=1e-04 \
--max_grad_norm=1 \
--lr_scheduler="cosine" --lr_warmup_steps=0 \
--output_dir="/kaggle/working" \
--push_to_hub \
--caption_column="caption" \
--image_column="image" \
--hub_model_id="plantVillage-stableDiffusion-2-iter2_with_one_caption" \
--checkpointing_steps=1000 \
--validation_prompt="Tomato YellowLeaf Curl Virus" \
--seed=1337
```
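With `train_batch_size=16` and `gradient_accumulation_steps=4`, each optimizer step accumulates an effective batch of 64 images per device, so the 3000 training steps cover roughly 192,000 image-caption samples at 256×256 resolution.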