---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
|
|
|
|
|
|
|
|
|
|
|
|
|
# LoRA text2image fine-tuning - AlexeyGHT/fine_tuning_gen |
|
|
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4, fine-tuned on the AlexeyGHT/Iris_gen dataset. Some example images generated with the adapter are shown below.
|
|
|
|
|
 |
|
|
 |
|
|
 |
|
|
 |
|
|
|
|
|
|
|
|
|
|
|
## Intended uses & limitations |
|
|
|
|
|
#### How to use |
|
|
|
|
|
A minimal sketch using the `diffusers` LoRA loading API (dtype, device, and prompt are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model the LoRA adapter was trained on
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
# Attach the LoRA adapter weights from this repository
pipe.load_lora_weights("AlexeyGHT/fine_tuning_gen")
pipe.to("cuda")

prompt = "Iris of the eye, web pattern, blue and light blue, beautiful complex pattern"
image = pipe(prompt).images[0]
image.save("iris.png")
```
|
|
|
|
|
#### Limitations and bias |
|
|
|
|
|
LoRA fine-tuning on a small, domain-specific dataset such as AlexeyGHT/Iris_gen narrows the model's output distribution: prompts outside the iris-pattern domain may produce degraded results compared to the base model. The adapter also inherits the known limitations and biases of CompVis/stable-diffusion-v1-4, which was trained on web-scraped image-text data; see the base model card for details.
|
|
|
|
|
## Training details |
|
|
|
|
|
The weights were fine-tuned on the AlexeyGHT/Iris_gen dataset, with captions taken from its `text` column, using the `diffusers` LoRA text-to-image training script with the following command:
|
|
|
|
|
```bash
accelerate launch --mixed_precision="no" /content/diffusers/examples/text_to_image/train_text_to_image_lora.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$DATASET_NAME --caption_column="text" \
  --resolution=512 --random_flip \
  --train_batch_size=6 \
  --num_train_epochs=50 --checkpointing_steps=450 \
  --learning_rate=1e-04 --lr_scheduler="constant" --lr_warmup_steps=0 \
  --seed=42 \
  --output_dir="fine_tuning_gen" \
  --validation_prompt "Iris of the eye, web pattern, blue and light blue, beautiful complex pattern" \
  --report_to="wandb" \
  --push_to_hub
```