v0.16.0 DeepFloyd IF & ControlNet v1.1


DeepFloyd's IF: The open-sourced Imagen


IF is a pixel-based text-to-image generation model and was released in late April 2023 by DeepFloyd.

The model architecture is strongly inspired by Google's closed-source Imagen. IF is a novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding.


Installation

pip install torch --upgrade  # diffusers' IF is optimized for torch 2.0
pip install diffusers --upgrade

Accept the License

Before you can use IF, you need to accept its usage conditions. To do so:

  1. Make sure to have a Hugging Face account and be logged in
  2. Accept the license on the model card of DeepFloyd/IF-I-XL-v1.0
  3. Log in locally:
from huggingface_hub import login

login()

and enter your Hugging Face Hub access token.
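Alternatively, the same login can be done from a shell with the Hugging Face CLI (installed together with huggingface_hub):

huggingface-cli login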

Code example

from diffusers import DiffusionPipeline
from diffusers.utils import pt_to_pil
import torch

# stage 1
stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
stage_1.enable_model_cpu_offload()

# stage 2 (no text encoder needed; the prompt embeddings from stage 1 are reused)
stage_2 = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
)
stage_2.enable_model_cpu_offload()

# stage 3: Stable Diffusion x4 upscaler, reusing the safety modules from stage 1
safety_modules = {
    "feature_extractor": stage_1.feature_extractor,
    "safety_checker": stage_1.safety_checker,
    "watermarker": stage_1.watermarker,
}
stage_3 = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16
)
stage_3.enable_model_cpu_offload()

prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"'
generator = torch.manual_seed(1)

# text embeds
prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt)

# stage 1
image = stage_1(
    prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt"
).images
pt_to_pil(image)[0].save("./if_stage_I.png")
# stage 2
image = stage_2(
    image=image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    generator=generator,
    output_type="pt",
).images
pt_to_pil(image)[0].save("./if_stage_II.png")
# stage 3
image = stage_3(prompt=prompt, image=image, noise_level=100, generator=generator).images
image[0].save("./if_stage_III.png")

For more details about speed and memory optimizations, please have a look at the blog or docs below.
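One of the memory optimizations described there is to run the T5 text encoder in 8-bit so that prompt encoding fits into much less GPU memory. A rough sketch of the idea (assuming bitsandbytes and accelerate are installed; see the blog post and docs for the complete recipe):

from transformers import T5EncoderModel
from diffusers import DiffusionPipeline
import gc
import torch

# load only the 8-bit text encoder and compute the prompt embeddings
text_encoder = T5EncoderModel.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit"
)
pipe = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", text_encoder=text_encoder, unet=None, device_map="auto"
)
prompt_embeds, negative_embeds = pipe.encode_prompt("a photo of a kangaroo")

# free the text encoder before loading the UNet-only stage 1 pipeline
# (load it with text_encoder=None and pass the precomputed embeddings)
del pipe, text_encoder
gc.collect()
torch.cuda.empty_cache()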

Useful links

👉 The official codebase
👉 Blog post
👉 Space Demo
👉 In-detail docs

ControlNet v1.1

Lvmin Zhang has released improved ControlNet checkpoints as well as a couple of new ones.

You can find all 🧨 Diffusers checkpoints here.
Please have a look directly at the model cards for how to use the checkpoints; a minimal usage sketch follows below.
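As a quick orientation, here is a minimal sketch using the canny checkpoint with the StableDiffusionControlNetPipeline (the input image path and prompt are placeholders; the model cards describe the recommended preprocessing and settings for each checkpoint):

import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler

# turn an input image into a canny edge map to use as the control image
image = np.array(Image.open("input.png").convert("RGB"))  # placeholder path
edges = cv2.Canny(image, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

generator = torch.manual_seed(0)
result = pipe("a bird", image=control_image, num_inference_steps=20, generator=generator).images[0]
result.save("controlnet_canny.png")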

Improved checkpoints:

| Model Name | Control Image Overview |
| --- | --- |
| lllyasviel/control_v11p_sd15_canny | Trained with canny edge detection. A monochrome image with white edges on a black background. |
| lllyasviel/control_v11p_sd15_mlsd | Trained with multi-level line segment detection. An image with annotated line segments. |
| lllyasviel/control_v11f1p_sd15_depth | Trained with depth estimation. An image with depth information, usually represented as a grayscale image. |
| lllyasviel/control_v11p_sd15_normalbae | Trained with surface normal estimation. An image with surface normal information, usually represented as a color-coded image. |
| lllyasviel/control_v11p_sd15_seg | Trained with image segmentation. An image with segmented regions, usually represented as a color-coded image. |
| lllyasviel/control_v11p_sd15_lineart | Trained with line art generation. An image with line art, usually black lines on a white background. |
| lllyasviel/control_v11p_sd15_openpose | Trained with human pose estimation. An image with human poses, usually represented as a set of keypoints or skeletons. |
| lllyasviel/control_v11p_sd15_scribble | Trained with scribble-based image generation. An image with scribbles, usually random or user-drawn strokes. |
| lllyasviel/control_v11p_sd15_softedge | Trained with soft edge image generation. An image with soft edges, usually to create a more painterly or artistic effect. |

New checkpoints:

| Model Name | Control Image Overview |
| --- | --- |
| lllyasviel/control_v11e_sd15_ip2p | Trained with pixel to pixel instruction. No condition image. |
| lllyasviel/control_v11p_sd15_inpaint | Trained with image inpainting. No condition image. |
| lllyasviel/control_v11e_sd15_shuffle | Trained with image shuffling. An image with shuffled patches or regions. |
| lllyasviel/control_v11p_sd15s2_lineart_anime | Trained with anime line art generation. An image with anime-style line art. |

All commits

Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @1lint
    • add from_ckpt method as Mixin (#2318)
  • @asfiyab-nvidia
    • Add TensorRT SD/txt2img Community Pipeline to diffusers along with TensorRT utils (#2974)
    • Fix TensorRT community pipeline device set function (#3157)
  • @nupurkmr9
    • adding custom diffusion training to diffusers examples (#3031)
  • @XinyuYe-Intel
    • Added distillation for quantization example on textual inversion. (#2760)
  • @SkyTNT
    • [Community Pipelines] Update lpw_stable_diffusion pipeline (#3197)
