---
base_model: Alpha-VLLM/Lumina-Image-2.0
library_name: diffusers
license: apache-2.0
instance_prompt: a photo of sks dog
widget:
- text: A photo of sks dog in a bucket
  output:
    url: image_0.png
- text: A photo of sks dog in a bucket
  output:
    url: image_1.png
- text: A photo of sks dog in a bucket
  output:
    url: image_2.png
- text: A photo of sks dog in a bucket
  output:
    url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- lumina2
- lumina2-diffusers
- template:sd-lora
---

# Lumina2 DreamBooth LoRA - Dino-LeeTaeHun/trained-lumina2-lora3

## Model description

These are Dino-LeeTaeHun/trained-lumina2-lora3 DreamBooth LoRA weights for Alpha-VLLM/Lumina-Image-2.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Lumina2 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_lumina2.md).

## Trigger words

You should use `a photo of sks dog` to trigger the image generation.

The following `system_prompt` was also used during training (ignore if `None`): None.

## Download model

[Download the *.safetensors LoRA](https://huggingface.co/Dino-LeeTaeHun/trained-lumina2-lora3/tree/main) in the Files & versions tab.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

A minimal sketch, assuming a recent `diffusers` release with Lumina2 support (the pipeline class name may differ across versions) and a CUDA-capable GPU:

```py
import torch
from diffusers import Lumina2Pipeline

# Load the base model and move it to the GPU
pipe = Lumina2Pipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Image-2.0", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the DreamBooth LoRA weights from this repository
pipe.load_lora_weights("Dino-LeeTaeHun/trained-lumina2-lora3")

# The trigger phrase "sks dog" activates the learned subject
image = pipe("A photo of sks dog in a bucket").images[0]
image.save("sks_dog.png")
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).

## Intended uses & limitations

#### How to use

The snippet below sketches one possible workflow; the `fuse_lora` call and the sampler settings are illustrative defaults, not values tuned for this checkpoint:

```python
import torch
from diffusers import Lumina2Pipeline

pipe = Lumina2Pipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Image-2.0", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Dino-LeeTaeHun/trained-lumina2-lora3")

# Optionally fuse the LoRA into the base weights for faster inference;
# lower lora_scale to weaken the DreamBooth subject
pipe.fuse_lora(lora_scale=1.0)

image = pipe(
    "A photo of sks dog in a bucket",
    num_inference_steps=30,  # illustrative; adjust for quality/speed
).images[0]
image.save("sks_dog_bucket.png")
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]