---
tags:
- lora
---

<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->

# LoRA text2image fine-tuning - AlexeyGHT/fine_tuning_gen
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4, fine-tuned on the AlexeyGHT/Iris_gen dataset. You can find some example images below.

## Intended uses & limitations

#### How to use

```python
# Example sketch (not provided by the authors): load the base model and apply
# these LoRA weights with diffusers' `load_lora_weights`, then generate an image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe.load_lora_weights("AlexeyGHT/fine_tuning_gen")
pipe.to("cuda")

# The validation prompt from training, reused here for illustration.
prompt = "Iris of the eye, web pattern, blue and light blue, beautiful complex pattern"
image = pipe(prompt).images[0]
image.save("iris.png")
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]

The LoRA weights were trained with the following command:

```shell
accelerate launch --mixed_precision="no" /content/diffusers/examples/text_to_image/train_text_to_image_lora.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$DATASET_NAME --caption_column="text" \
  --resolution=512 --random_flip \
  --train_batch_size=6 \
  --num_train_epochs=50 --checkpointing_steps=450 \
  --learning_rate=1e-04 --lr_scheduler="constant" --lr_warmup_steps=0 \
  --seed=42 \
  --output_dir="fine_tuning_gen" \
  --validation_prompt "Iris of the eye, web pattern, blue and light blue, beautiful complex pattern" \
  --report_to="wandb" \
  --push_to_hub
```
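
The command above references `$MODEL_NAME` and `$DATASET_NAME` without defining them; given the base model and dataset named in this card, they would presumably be set along these lines:

```shell
# Presumed values, matching the base model and dataset named in this card.
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export DATASET_NAME="AlexeyGHT/Iris_gen"
```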