InvokeAI Version 3.0.0 Beta-9


We are pleased to announce a new beta release of InvokeAI 3.0 for user testing.

  • What's New
  • Getting Started with SDXL
  • What's Missing
  • Installation and Upgrading
  • Getting Help
  • Development Roadmap
  • Detailed Change Log

Please use the 3.0.0 release discussion thread, InvokeAI Issues, or the InvokeAI Discord Server to report bugs and other issues.

Recent fixes

  • Stable Diffusion XL (SDXL) 0.9 support in the node editor. See Getting Started with SDXL.
  • Stable Diffusion XL models added to the optional starter models presented by the model installer.
  • Memory and performance improvements for XL models (thanks to @StAlKeR7779).
  • Image upscaling using the latest version of RealESRGAN (fixed thanks to @psychedelicious).
  • VRAM optimizations to allow SDXL to run in 8 GB VRAM environments.
  • Feature-complete Model Manager in the Web GUI, providing online model installation, configuration, and deletion.
  • Recommended LoRA and ControlNet models added to the model installer.
  • UI tweaks, including updated hotkeys.
  • Translation and tooltip fixes.
  • Documentation fixes, including a description of all options in invokeai.yaml.
  • Improved support for half-precision generation on Macintoshes.
  • Improved long prompt support.
  • Fixed the "Package 'invokeai' requires a different Python" error.

Known bug in this beta: If you are installing InvokeAI completely from scratch, you may get a black screen on the very first image generation. Just reload the web page and the problem will be resolved for this and subsequent generations.

What's New in v3.0.0

Quite a lot has changed, both internally and externally.

Web User Interface:

  • A ControlNet interface that gives you fine control over elements such as the posture of figures in generated images, guided by a reference image that illustrates the result you wish to achieve.
  • A Dynamic Prompts interface that lets you generate combinations of prompt elements.
  • SDXL support.
  • A redesigned user interface that makes it easier to access frequently-used elements, such as the random seed generator.
  • The ability to create multiple image galleries, allowing you to organize your generated images topically or chronologically.
  • A graphical node editor that lets you design and execute complex image generation operations using a point-and-click interface (see below for more about nodes).
  • Macintosh users can now load models at half precision (float16), halving the RAM used by each model.
  • Advanced users can choose earlier CLIP layers during generation to produce a larger variety of images.
  • Long prompt support (>77 tokens).
  • Schedulers that did not work properly for Canvas inpainting have been fixed.

The WebUI can now be launched from the command line using either invokeai-web (preferred new way) or invokeai --web (deprecated old way).

Command Line Tool

  • The previous command line tool has been removed and replaced with a new developer-oriented tool invokeai-node-cli that allows you to experiment with InvokeAI nodes.

Installer

The console-based model installer, invokeai-model-install, has been redesigned and now provides tabs for installing checkpoint models, diffusers models, ControlNet models, LoRAs, and Textual Inversion embeddings. You can install models stored locally on disk, or install them from their web URLs or HuggingFace repo IDs.

Internal

Internally the code base has been completely rewritten to be much easier to maintain and extend. Importantly, all image generation options are now represented as "nodes", which are small pieces of code that transform inputs into outputs and can be connected together into a graph of operations. Generation and image manipulation operations can now be easily extended by writing new InvokeAI nodes.
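To make the idea concrete, here is a minimal sketch of the node concept in Python. The class and field names here are hypothetical illustrations of "small pieces of code that transform inputs into outputs", not the actual InvokeAI node API:

```python
# Hypothetical sketch of the "node" idea: small units that transform
# inputs into outputs and can be wired together into a graph.
# This is NOT the real InvokeAI node API.
from dataclasses import dataclass


@dataclass
class PromptNode:
    """A source node whose output is simply its prompt text."""
    prompt: str

    def invoke(self) -> str:
        return self.prompt


@dataclass
class UppercaseNode:
    """A transform node: receives an upstream node's output as input."""
    text: str

    def invoke(self) -> str:
        return self.text.upper()


# "Connecting" two nodes means feeding one node's output into the next:
upstream = PromptNode(prompt="bluebird in a sakura tree")
downstream = UppercaseNode(text=upstream.invoke())
print(downstream.invoke())  # BLUEBIRD IN A SAKURA TREE
```

A real node performs image-generation work (denoising, decoding latents, and so on), but the shape is the same: typed inputs, an invoke step, and outputs that downstream nodes consume.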

Getting Started with SDXL

Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high-quality 1024x1024 photorealistic images as well as many other visual styles. As of this writing (July 18, 2023), SDXL has not been officially released, but a pre-release 0.9 version is widely available. InvokeAI provides support for SDXL image generation via its Node Editor, a user interface that allows you to create and customize complex image generation pipelines using a drag-and-drop interface. SDXL generation is not yet directly supported in the text2img, img2img, and canvas panels, but we expect to add this feature in the next few days.

SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. For best results, the initial image is handed off from the base to the refiner before all the denoising steps are complete. It is not clear whether SDXL 1.0, when it is released, will require the refiner.

To experiment with SDXL, you'll need the "base" and "refiner" models. Currently a beta version of SDXL, version 0.9, is available from HuggingFace for research purposes. To obtain access, you will need to register with HF at https://huggingface.co/join, obtain an access token at https://huggingface.co/settings/tokens, and add the access token to your environment. The easiest way to do the latter is to run the InvokeAI configure script, launcher option [6], and cut and paste the access token into the access token field. Save the changes.
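If you prefer to set the token in your shell environment directly instead of using the configure script, a minimal sketch follows. It assumes the standard HuggingFace Hub environment variable; the token value shown is a placeholder you must replace with your own:

```shell
# Export the HuggingFace access token so child processes (such as the
# model installer) can authenticate. Replace the placeholder with the
# token from https://huggingface.co/settings/tokens.
export HUGGING_FACE_HUB_TOKEN="hf_xxxxxxxxxxxxxxxxxxxx"

# Confirm the variable is set without printing the secret itself:
echo "${HUGGING_FACE_HUB_TOKEN:+token is set}"
```

Add the export line to your shell profile if you want it to persist across sessions.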

Now navigate to https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9 and fill out the access request form for research use. You will be granted instant access to download. Next launch the InvokeAI console-based model installer by selecting launcher option [5] or by activating the virtual environment and giving the command invokeai-model-install. In the STARTER MODELS section, select the checkboxes for stable-diffusion-xl-base-0-9 and stable-diffusion-xl-refiner-0-9. Press Apply Changes to install the models and keep the installer running, or Apply Changes and Exit to install the models and exit back to the launcher menu.

Alternatively you can install these models from the Web UI Model Manager (cube at the bottom of the left-hand panel) and navigate to Import Models. In the field labeled Location type in the repo id of the base model, which is stabilityai/stable-diffusion-xl-base-0.9. Press Add Model and wait for the model to download and install (the page will freeze while this is happening). After receiving confirmation that the model installed, repeat with stabilityai/stable-diffusion-xl-refiner-0.9.

Note that these are large models (12 GB each) so be prepared to wait a while.

To use the installed models enter the Node Editor (inverted "Y" in the left-hand panel) and upload either the SDXL base-only or SDXL base+refiner invocation graphs. This will load and display a flow diagram showing the steps in generating an SDXL image.

Ensure that the SDXL Model Loader (leftmost column, bottom) is set to load the SDXL base model on your system, and that the SDXL Refiner Model Loader (third column, top) is set to load the SDXL refiner model on your system. Find the nodes that contain the example prompt and style ("bluebird in a sakura tree" and "chinese classical painting") and replace them with the prompt and style of your choice. Then press the Invoke button. If all goes well, an image will be generated and added to the image gallery.

Be aware that SDXL support is an experimental feature and is not 100% stable. When designing your own SDXL pipelines, note that certain settings have a disproportionate effect on image quality. In particular, the VAE latents-decode step must be run at fp32 precision (using a slider at the bottom of the VAE node), and images will change dramatically as the denoising threshold used by the refiner is adjusted.

Also be aware that SDXL requires at least 8 GB of VRAM in order to render 1024x1024 images. For best performance, we recommend the following settings in invokeai.yaml:

precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.0

What's Missing in v3.0.0

Some features are missing or not quite working yet. These include:

  • SDXL models can only be used in the node editor, and not in the text2img, img2img or unified canvas panels.
  • A migration path to import 2.3-generated images into the 3.0 image gallery
  • Diffusers-style LoRA files (with a HuggingFace repository ID) can be imported but do not run. There are very few of these models and they will not be supported at release time.
  • Various minor glitches in image gallery behavior.

The following 2.3 features are not available:

  • Variation generation (may be added in time for the final release)
  • Perlin Noise (will likely not be added)
  • Noise Threshold (available through Node Editor)
  • Symmetry (will likely not be added)
  • Seamless tiling (will likely not be added)
  • Face restoration (no longer needed, will not be added)

Installation / Upgrading

Installing using the InvokeAI zip file installer

To install 3.0.0 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you have an earlier version of InvokeAI installed, we strongly recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.

InvokeAI-installer-v3.0.0+b9.zip

Upgrading in place

All users can upgrade from previous beta versions using the launcher's "upgrade" facility. If you are on Linux or Macintosh, you may also upgrade a 2.3.2 or later version of InvokeAI to 3.0 using this recipe; upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see the workaround below):

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the upgrade menu option [9]
  4. Select "Manually enter the tag name for the version you wish to update to" option [3]
  5. When prompted for the tag, enter v3.0.0+b9
  6. When the upgrade is complete, the main menu will reappear. Choose "rerun the configure script to fix a broken install" option [7]

Windows users can instead follow this recipe:

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.bat
  3. Select the "Developer's console" option [8]
  4. Type the following command:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.0+b9.zip" --use-pep517 --upgrade

(Replace v3.0.0+b9 with the current version number.)

This will produce a working 3.0 directory. You may now launch the WebUI in the usual way by selecting option [1] from the launcher script.

After you have confirmed everything is working, you may remove the following backup directories and files:

  • invokeai.init.orig
  • models.orig
  • configs/models.yaml.orig
  • embeddings
  • loras

To get back to a working 2.3 directory, rename all the *.orig files and directories to their original names (removing the .orig suffix), run the update script again, and select [1] "Update to the latest official release".

Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory

We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:

invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>

Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original model and configuration files will be backed up.)

Upgrading using pip

Once 3.0.0 is released (out of alpha and beta), developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:

pip install --use-pep517 --upgrade InvokeAI

You may specify a particular version by adding the version number to the command, as in:

pip install --use-pep517 --upgrade InvokeAI==3.0.0+b8

To upgrade to an xformers version if you are not currently using xformers, use:

pip install --use-pep517 --upgrade InvokeAI[xformers]

You can see which versions are available by going to the PyPI InvokeAI Project Page.

Getting Help

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.

Getting a Stack Trace for Bug Reporting

If you are getting the message "Server Error" in the web interface, you can help us track down the bug by getting a stack trace from the failed operation. This involves several steps. Please see this Discord thread for a step-by-step guide to generating stack traces.

Development Roadmap

If you are looking for a stable version of InvokeAI, either use this release, install from the v2.3 source code branch, or use the pre-nodes tag from the main branch. Developers seeking to contribute to InvokeAI should use the head of the main branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.

Detailed Change Log

New Contributors

Full Changelog: v2.3.0...v3.0.0+a3
