{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "colab": { "private_outputs": true, "provenance": [] }, "kernelspec": { "name": "python3", "display_name": "Python 3" }, "language_info": { "name": "python" } }, "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Import / Convert Replicate LoRA to Hugging Face\n", "\n", "Convert and import a LoRA or LoRAs you trained with the Replicate trainer to Hugging Face." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "EzgI2tPTO97I" }, "outputs": [], "source": [ "#@markdown Install OS dependencies\n", "!apt-get install -y skopeo\n", "!apt-get install -y jq" ] }, { "cell_type": "code", "source": [ "#@markdown Install python dependencies\n", "!pip install huggingface_hub" ], "metadata": { "id": "jitE6IrayFDH" }, "execution_count": null, "outputs": [] }, { "cell_type": "code", "source": [ "#@markdown Choose the Replicate SDXL LoRA repository you would like to upload to Hugging Face (you don't need to be the author). 
Grab your Replicate token [here](https://replicate.com/account/api-tokens)\n", "import requests\n", "import json\n", "\n", "replicate_model = \"fofr/sdxl-emoji\" #@param {type: \"string\"}\n", "replicate_token = \"r8_***\" #@param {type: \"string\"}\n", "\n", "headers = { \"Authorization\": f\"Token {replicate_token}\" }\n", "url = f\"https://api.replicate.com/v1/models/{replicate_model}\"\n", "\n", "response = requests.get(url, headers=headers)\n", "model_data = response.json()\n", "model_latest_version = model_data['latest_version']['id']\n", "lora_name = model_data['name']\n", "lora_author = model_data['owner']\n", "lora_description = model_data['description']\n", "lora_url = model_data['url']\n", "lora_image = model_data['cover_image_url']\n", "# owner/name:version, the identifier format replicate.run() expects\n", "lora_docker_image = f\"{replicate_model}:{model_latest_version}\"\n", "default_prompt = model_data[\"default_example\"][\"input\"][\"prompt\"]" ], "metadata": { "id": "1SNPPvVVUk5T" }, "execution_count": null, "outputs": [] }, { "cell_type": "code", "source": [ "#@markdown Grab the trained LoRA weights and untar them into a folder\n", "cmd = f'skopeo inspect docker://r8.im/{replicate_model}@sha256:{model_latest_version} --config | jq -r \\'.config.Env[] | select(startswith(\"COG_WEIGHTS=\"))\\' | awk -F= \\'{{print $2}}\\''\n", "print(cmd)\n", "url = !{cmd}\n", "print(url)\n", "url = url[0]\n", "tar_name = url.split(\"/\")[-1]\n", "folder_name = \"lora_folder\" #@param {type:\"string\"}\n", "!mkdir -p {folder_name}\n", "!wget {url}\n", "!tar -xvf {tar_name} -C {folder_name}" ], "metadata": { "id": "cdtnTm0GPLFH" }, "execution_count": null, "outputs": [] }, {
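"cell_type": "code", "source": [ "#@markdown (Optional) Sanity-check the extracted folder. The cog-sdxl trainer output is assumed to contain `lora.safetensors`, `embeddings.pti` and `special_params.json`; adjust `expected_files` if your tar differs\n", "import os\n", "\n", "# folder_name comes from the download cell above; fall back to the default\n", "folder = globals().get(\"folder_name\", \"lora_folder\")\n", "expected_files = [\"lora.safetensors\", \"embeddings.pti\", \"special_params.json\"]\n", "missing = [f for f in expected_files if not os.path.exists(os.path.join(folder, f))]\n", "print(\"All expected files present\" if not missing else f\"Missing from {folder}: {missing}\")" ], "metadata": {}, "execution_count": null, "outputs": [] }, {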
"cell_type": "code", "source": [ "#@markdown Login with Hugging Face Hub (pick a `write` token)\n", "from huggingface_hub import notebook_login, upload_folder, create_repo\n", "notebook_login()" ], "metadata": { "id": "eV6ApIY6dU4K" }, "execution_count": null, "outputs": [] }, { "cell_type": "code", "source": [ "#@markdown Insert the `hf_repo` you would like to upload this model to. It has to be either the username you logged in with above (e.g. `fofr`, `zeke`, `nateraw`) or an organization you are part of (e.g. `replicate`)\n", "hf_repo = \"multimodalart\" #@param {type: \"string\"}" ], "metadata": { "id": "5LCbCbjZdnZZ" }, "execution_count": null, "outputs": [] }, { "cell_type": "code", "source": [ "#@markdown Create HF model repo `hf_repo/lora_name`\n", "hf_model_slug = f\"{hf_repo}/{lora_name}\"\n", "create_repo(hf_model_slug, repo_type=\"model\")" ], "metadata": { "id": "ZVWoAy1U2cis" }, "execution_count": null, "outputs": [] }, { "cell_type": "code", "source": [ "#@markdown Set up the `README.md` for the HF model repo.\n", "\n", "# Replace each token nicename in the default prompt with the\n", "# model's trigger token, as mapped in `special_params.json`\n", "replaced_prompt = default_prompt\n", "activation_triggers = []\n", "with open(f'{folder_name}/special_params.json', 'r') as f:\n", " token_data = json.load(f)\n", "for key, value in token_data.items():\n", " replaced_prompt = replaced_prompt.replace(key, value)\n", " activation_triggers.append(value)\n", "comma_activation_triggers = ', '.join(map(str, activation_triggers))\n", "README_TEXT = f'''---\n", "license: creativeml-openrail-m\n", "tags:\n", " - text-to-image\n", " - stable-diffusion\n", " - lora\n", " - diffusers\n", " - pivotal-tuning\n", "base_model: stabilityai/stable-diffusion-xl-base-1.0\n", "pivotal_tuning: true\n", "textual_embeddings: embeddings.pti\n", "instance_prompt: {comma_activation_triggers}\n", "inference: true\n", "---\n", "# {lora_name} LoRA by [{lora_author}](https://replicate.com/{lora_author})\n", "### 
{lora_description}\n", "\n", "![lora_image]({lora_image})\n", "\n", "## Inference with Replicate API\n", "Grab your Replicate API token [here](https://replicate.com/account)\n", "```bash\n", "pip install replicate\n", "export REPLICATE_API_TOKEN=r8_*************************************\n", "```\n", "\n", "```py\n", "import replicate\n", "\n", "output = replicate.run(\n", " \"{lora_docker_image}\",\n", " input={{\"prompt\": \"{default_prompt}\"}}\n", ")\n", "print(output)\n", "```\n", "You may also do inference via the API with Node.js or curl, and locally with Cog and Docker; [check out the Replicate API page for this model]({lora_url}/api)\n", "\n", "## Inference with 🧨 diffusers\n", "Replicate SDXL LoRAs are trained with Pivotal Tuning, which combines training a concept via Dreambooth LoRA with training a new token with Textual Inversion.\n", "As `diffusers` doesn't yet support textual inversion for SDXL, we will use the cog-sdxl `TokenEmbeddingsHandler` class.\n", "\n", "The trigger tokens for your prompt will be `{comma_activation_triggers}`\n", "\n", "```shell\n", "pip install diffusers transformers accelerate safetensors huggingface_hub\n", "git clone https://github.com/replicate/cog-sdxl cog_sdxl\n", "```\n", "\n", "```py\n", "import torch\n", "from huggingface_hub import hf_hub_download\n", "from diffusers import DiffusionPipeline\n", "from safetensors.torch import load_file\n", "from diffusers.models import AutoencoderKL\n", "\n", "pipe = DiffusionPipeline.from_pretrained(\n", " \"stabilityai/stable-diffusion-xl-base-1.0\",\n", " torch_dtype=torch.float16,\n", " variant=\"fp16\",\n", ").to(\"cuda\")\n", "\n", "pipe.load_lora_weights(\"{hf_model_slug}\", weight_name=\"lora.safetensors\")\n", "\n", "embedding_path = hf_hub_download(repo_id=\"{hf_model_slug}\", filename=\"embeddings.pti\", repo_type=\"model\")\n", "\n", "state_dict = load_file(embedding_path)\n", "\n", "pipe.load_textual_inversion(state_dict[\"text_encoders_0\"], token=[\"<s0>\", \"<s1>\"], 
text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer)\n", "pipe.load_textual_inversion(state_dict[\"text_encoders_1\"], token=[\"<s0>\", \"<s1>\"], text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2)\n", "\n", "prompt = \"{replaced_prompt}\"\n", "images = pipe(\n", " prompt,\n", " cross_attention_kwargs={{\"scale\": 0.8}},\n", ").images\n", "# your output image\n", "images[0]\n", "```\n", "'''\n", "\n", "with open(f'{folder_name}/README.md', 'w') as f:\n", " f.write(README_TEXT)" ], "metadata": { "id": "tEaGfGz0RRMK" }, "execution_count": null, "outputs": [] }, { "cell_type": "code", "source": [ "#@markdown Upload the repo to HF!\n", "upload_folder(\n", " folder_path=folder_name,\n", " repo_id=hf_model_slug,\n", " repo_type=\"model\"\n", ")" ], "metadata": { "id": "_8MGlxgBgKyT" }, "execution_count": null, "outputs": [] } ] }