🎨 Finetuned Stable Diffusion inpainting
The first official Stable Diffusion checkpoint fine-tuned on inpainting has been released.
You can try it out in the official demo or code it up yourself 💻:
```python
from io import BytesIO

import torch
import PIL
import requests
from diffusers import StableDiffusionInpaintPipeline


def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")


img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    revision="fp16",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
output = pipe(prompt=prompt, image=image, mask_image=mask_image)
image = output.images[0]
```
gives:
| image | mask_image | prompt | output |
|---|---|---|---|
| *(input image)* | *(mask image)* | Face of a yellow cat, high resolution, sitting on a park bench | *(inpainted result)* |
⚠️ This release deprecates the unsupervised noising-based inpainting pipeline into `StableDiffusionInpaintPipelineLegacy`.
The new `StableDiffusionInpaintPipeline` is based on a Stable Diffusion model finetuned for the inpainting task: https://huggingface.co/runwayml/stable-diffusion-inpainting
**Note**: When loading `StableDiffusionInpaintPipeline` with a non-finetuned model (i.e. one saved with `diffusers<=0.5.1`), the pipeline will default to `StableDiffusionInpaintPipelineLegacy` to maintain backward compatibility ✨
```python
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
assert pipe.__class__.__name__ == "StableDiffusionInpaintPipelineLegacy"
```
**Context**: Why this change? When Stable Diffusion came out ~2 months ago, there were many unofficial in-painting demos using the original v1-4 checkpoint (`"CompVis/stable-diffusion-v1-4"`). These demos worked reasonably well, so we integrated an experimental `StableDiffusionInpaintPipeline` class into `diffusers`. Now that the official inpainting checkpoint has been released (https://github.com/runwayml/stable-diffusion), we decided to make it the official pipeline and move the old / hacky one to `StableDiffusionInpaintPipelineLegacy`.
🚀 ONNX pipelines for image2image and inpainting
Thanks to the contribution by @zledas (#552), this release supports `OnnxStableDiffusionImg2ImgPipeline` and `OnnxStableDiffusionInpaintPipeline` optimized for CPU inference:
```python
from diffusers import OnnxStableDiffusionImg2ImgPipeline, OnnxStableDiffusionInpaintPipeline

img_pipeline = OnnxStableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", revision="onnx", provider="CPUExecutionProvider"
)

inpaint_pipeline = OnnxStableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", revision="onnx", provider="CPUExecutionProvider"
)
```
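Once loaded, the pipelines are called like their PyTorch counterparts. Below is a minimal usage sketch, assuming the call signatures mirror the PyTorch pipelines of this release (`init_image` for img2img; `image` plus `mask_image` for the finetuned inpainting pipeline) and using hypothetical local files for the inputs:

```python
from PIL import Image

# Hypothetical input files, for illustration only.
init_image = Image.open("input.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))

# Image-to-image: start from `init_image` and denoise towards the prompt.
img = img_pipeline(
    prompt="A fantasy landscape, trending on artstation",
    init_image=init_image,
    strength=0.75,
).images[0]

# Inpainting: regenerate only the masked region of the input image.
inpainted = inpaint_pipeline(
    prompt="Face of a yellow cat, high resolution",
    image=init_image,
    mask_image=mask_image,
).images[0]
```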
🌍 Community Pipelines
Two new community pipelines have been added to `diffusers` 🔥
**Stable Diffusion Interpolation**
Interpolate the latent space of Stable Diffusion between different prompts/seeds. For more info, see stable-diffusion-videos.
For a code example, see Stable Diffusion Interpolation.
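As a quick sketch of how this community pipeline is loaded and used (the `custom_pipeline` argument is the standard way to load community pipelines; the pipeline name `interpolate_stable_diffusion` and the `walk` parameters below follow the community README and should be treated as assumptions):

```python
import torch
from diffusers import DiffusionPipeline

# Load the community interpolation pipeline on top of the regular
# Stable Diffusion v1-4 weights.
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    custom_pipeline="interpolate_stable_diffusion",  # assumed pipeline name
).to("cuda")

# Walk the latent space between prompts/seeds and write frames to disk.
frame_filepaths = pipe.walk(
    prompts=["a dog", "a cat", "a horse"],
    seeds=[42, 1337, 1234],
    num_interpolation_steps=16,
    output_dir="./dreams",  # frames are saved here, one image per step
)
```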
**Stable Diffusion Mega**
One Stable Diffusion pipeline with all the functionalities of Text2Image, Image2Image and Inpainting; see the sketch after the credit below.
For a code example, see Stable Diffusion Mega.
- All in one Stable Diffusion Pipeline by @patrickvonplaten in #821
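A minimal sketch of the mega pipeline, again loaded via `custom_pipeline` (the method names `text2img`, `img2img`, and `inpaint` follow the community README; the exact argument names and input files are assumptions):

```python
import torch
from PIL import Image
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="stable_diffusion_mega",  # assumed pipeline name
    revision="fp16",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical input files, for illustration only.
init_image = Image.open("input.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))

# One pipeline, three tasks:
text2img_out = pipe.text2img("An astronaut riding a horse").images[0]
img2img_out = pipe.img2img(
    prompt="A fantasy landscape", init_image=init_image, strength=0.75
).images[0]
inpaint_out = pipe.inpaint(
    prompt="Face of a yellow cat", image=init_image, mask_image=mask_image
).images[0]
```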
📝 Changelog
- [Community] One step unet by @patrickvonplaten in #840
- Remove unneeded use_auth_token by @osanseviero in #839
- Bump to 0.6.0.dev0 by @anton-l in #831
- Remove the last of ["sample"] by @anton-l in #842
- Fix Flax pipeline: width and height are ignored #838 by @camenduru in #848
- [DeviceMap] Make sure stable diffusion can be loaded from older trans… by @patrickvonplaten in #860
- Fix small community pipeline import bug and finish README by @patrickvonplaten in #869
- Fix training push_to_hub (unconditional image generation): models were not saved before pushing to hub by @pcuenca in #868
- Fix table in community README.md by @nateraw in #879
- Add generic inference example to community pipeline readme by @apolinario in #874
- Rename frame filename in interpolation community example by @nateraw in #881
- Add Apple M1 tests by @anton-l in #796
- Fix autoencoder test by @pcuenca in #886
- Rename StableDiffusionOnnxPipeline -> OnnxStableDiffusionPipeline by @anton-l in #887
- Fix DDIM on Windows not using int64 for timesteps by @hafriedlander in #819
- [dreambooth] allow fine-tuning text encoder by @patil-suraj in #883
- Stable Diffusion image-to-image and inpaint using onnx. by @zledas in #552
- Improve ONNX img2img numpy handling, temporarily fix the tests by @anton-l in #899
- [Stable Diffusion Inpainting] Deprecate inpainting pipeline in favor of official one by @patrickvonplaten in #903
- [Community Pipeline] Make sure "mega" uses correct inpaint pipeline by @patrickvonplaten in #908
- Stable diffusion inpainting by @patil-suraj in #904
- ONNX supervised inpainting by @anton-l in #906