Update README.md

#27
by sayakpaul (HF Staff) - opened
Files changed (1)
  1. README.md +11 -10
README.md CHANGED
@@ -35,7 +35,7 @@ Key Features:
 
 Install the latest version of diffusers
 ```
-pip install git+https://github.com/huggingface/diffusers
+pip install -U diffusers
 ```
 
 The following contains a code snippet illustrating how to use the model to generate images based on text prompts:
@@ -47,10 +47,8 @@ import torch
 
 from diffusers import QwenImageEditPipeline
 
-pipeline = QwenImageEditPipeline.from_pretrained("Qwen/Qwen-Image-Edit")
+pipeline = QwenImageEditPipeline.from_pretrained("Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16).to("cuda")
 print("pipeline loaded")
-pipeline.to(torch.bfloat16)
-pipeline.to("cuda")
 pipeline.set_progress_bar_config(disable=None)
 image = Image.open("./input.png").convert("RGB")
 prompt = "Change the rabbit's color to purple, with a flash light background."
@@ -63,14 +61,17 @@ inputs = {
     "num_inference_steps": 50,
 }
 
-with torch.inference_mode():
-    output = pipeline(**inputs)
-    output_image = output.images[0]
-    output_image.save("output_image_edit.png")
-    print("image saved at", os.path.abspath("output_image_edit.png"))
-
+output = pipeline(**inputs)
+output_image = output.images[0]
+output_image.save("output_image_edit.png")
+print("image saved at", os.path.abspath("output_image_edit.png"))
 ```
 
+You can explore further optimization options in Diffusers by checking out the links below:
+
+* [Accelerating inference](https://huggingface.co/docs/diffusers/main/en/optimization/fp16)
+* [Reducing memory consumption](https://huggingface.co/docs/diffusers/main/en/optimization/memory)
+
 ## Showcase
 One of the highlights of Qwen-Image-Edit lies in its powerful capabilities for semantic and appearance editing. Semantic editing refers to modifying image content while preserving the original visual semantics. To intuitively demonstrate this capability, let's take Qwen's mascot—Capybara—as an example:
 ![Capibara](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片3.JPG#center)
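
For reference, here is a minimal consolidated sketch of how the usage snippet reads after this change. The imports and the `image`/`prompt` entries of the `inputs` dict are assumptions based on the surrounding snippet rather than lines shown in this diff; the other `inputs` entries, which the diff does not touch, are simply omitted.

```
import os

import torch
from PIL import Image

from diffusers import QwenImageEditPipeline

# Load the edit pipeline directly in bfloat16 and place it on the GPU in one call,
# replacing the separate .to(torch.bfloat16) / .to("cuda") calls.
pipeline = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16
).to("cuda")
print("pipeline loaded")
pipeline.set_progress_bar_config(disable=None)

image = Image.open("./input.png").convert("RGB")
prompt = "Change the rabbit's color to purple, with a flash light background."

inputs = {
    "image": image,
    "prompt": prompt,
    # ... the remaining inputs from the README are unchanged by this diff
    # and omitted here ...
    "num_inference_steps": 50,
}

# Run the edit and save the result next to the script.
output = pipeline(**inputs)
output_image = output.images[0]
output_image.save("output_image_edit.png")
print("image saved at", os.path.abspath("output_image_edit.png"))
```

Passing `torch_dtype` at load time avoids first materializing the weights in float32 and casting them afterwards, which is what the separate `.to(torch.bfloat16)` call did. The explicit `torch.inference_mode()` block is dropped, presumably because the pipeline's `__call__` already runs under `torch.no_grad()`.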
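On the memory side, one option from the "Reducing memory consumption" guide linked above is model CPU offloading. A minimal sketch, assuming you skip the explicit `.to("cuda")` call so that offloading can manage device placement itself:

```
import torch

from diffusers import QwenImageEditPipeline

pipeline = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16
)
# Keep submodules on the CPU and move each one to the GPU only while it runs.
# Slower per image, but the peak VRAM footprint drops considerably.
pipeline.enable_model_cpu_offload()
```

Model CPU offloading relies on `accelerate` being installed.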
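On the speed side, the "Accelerating inference" guide covers compiling the denoiser with `torch.compile`. A sketch under the assumption that the pipeline exposes its denoising network as `pipeline.transformer`:

```
import torch

from diffusers import QwenImageEditPipeline

pipeline = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16
).to("cuda")
# Compile the denoising network; the first call is slow while kernels are built,
# later calls reuse the compiled graph.
pipeline.transformer = torch.compile(pipeline.transformer)
```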