InvokeAI 3.0.0 Release Candidate 1


InvokeAI Version 3.0.0

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading Web Interface and an interactive Command Line Interface, and also serves as the foundation for multiple commercial products.

InvokeAI version 3.0.0 represents a major advance in functionality and ease of use compared with the last official release, 2.3.5.

  • What's New
  • Installation and Upgrading
  • Getting Started with SDXL
  • Recent Bug Fixes
  • What's Missing
  • Getting Help
  • Contributing
  • Detailed Change Log

Please use the 3.0.0 release discussion thread for comments on this release candidate, including feature requests, enhancement suggestions, and other non-critical issues. Report bugs to InvokeAI Issues. For interactive support from the development team, contributors, and user community, please join the InvokeAI Discord Server.

What's New in v3.0.0

Quite a lot has changed, both internally and externally:

Web User Interface:

  • A ControlNet interface that gives you fine control over such things as the posture of figures in generated images by providing an image that illustrates the end result you wish to achieve.
  • A Dynamic Prompts interface that lets you generate combinations of prompt elements.
  • Preliminary support for Stable Diffusion XL, the latest iteration of Stability AI's image generation models.
  • A redesigned user interface which makes it easier to access frequently-used elements, such as the random seed generator.
  • The ability to create multiple image galleries, allowing you to organize your generated images topically or chronologically.
  • A graphical Nodes Editor that lets you design and execute complex image generation operations using a point-and-click interface.
  • Macintosh users can now load models at half precision (float16) in order to reduce the amount of RAM required.
  • Advanced users can choose earlier CLIP layers during generation to produce a larger variety of images.
  • Long prompt support (>77 tokens).
  • Memory and speed improvements.
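
As a rough illustration of what the Dynamic Prompts feature does, the sketch below expands an `{a|b}`-style variant template into every combination. The `{red|blue}` syntax mirrors the Dynamic Prompts variant notation, but the code itself is purely illustrative and is not InvokeAI's implementation:

```python
import itertools
import re

def expand(template):
    """Expand every {a|b|c} variant group into all combinations."""
    # Split on the variant groups while keeping them in the result list.
    parts = re.split(r"(\{[^}]*\})", template)
    # Each variant group becomes a list of alternatives; plain text stays as-is.
    choices = [p[1:-1].split("|") if p.startswith("{") else [p] for p in parts]
    return ["".join(combo) for combo in itertools.product(*choices)]

prompts = expand("a {red|blue} bird in a {tree|bush}")
# Two groups of two alternatives each -> four prompts in total.
```

Each additional variant group multiplies the number of generated prompts, which is why a short template can queue up a large batch of generations.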

The WebUI can now be launched from the command line using either invokeai-web (the new, preferred command) or invokeai --web (the deprecated old way).

Command Line Tool

The previous command line tool has been removed and replaced with a new developer-oriented tool invokeai-node-cli that allows you to experiment with InvokeAI nodes.

Installer

The console-based model installer, invokeai-model-install, has been redesigned and now provides tabs for installing checkpoint models, diffusers models, ControlNet models, LoRAs, and Textual Inversion embeddings. You can install models stored locally on disk, or install them from their web URLs or repo IDs.

Internal

Internally, the code base has been completely rewritten to be much easier to maintain and extend. Importantly, all image generation options are now represented as "nodes", which are small pieces of code that transform inputs into outputs and can be connected together into a graph of operations. Generation and image manipulation operations can now be easily extended by writing new InvokeAI nodes.
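
The nodes-and-graph idea can be sketched in a few lines of Python. This is an illustrative toy, not InvokeAI's actual node API; it only shows how small input-to-output transforms compose into an executable graph:

```python
# Conceptual sketch only -- NOT InvokeAI's real node API. Each node
# transforms named inputs into a dict of outputs, and a graph resolves
# upstream dependencies on demand when a node is executed.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Node:
    name: str
    func: Callable[..., dict]                   # transforms inputs -> outputs
    inputs: dict = field(default_factory=dict)  # input name -> (source node, output key)

class Graph:
    def __init__(self):
        self.nodes = {}

    def add(self, node):
        self.nodes[node.name] = node

    def run(self, name, cache=None):
        """Execute a node, recursively evaluating upstream nodes first."""
        cache = {} if cache is None else cache
        if name not in cache:
            node = self.nodes[name]
            kwargs = {key: self.run(src, cache)[out]
                      for key, (src, out) in node.inputs.items()}
            cache[name] = node.func(**kwargs)
        return cache[name]

# A two-node pipeline: a prompt node feeding a stand-in generation node.
g = Graph()
g.add(Node("prompt", lambda: {"text": "a bluebird in a sakura tree"}))
g.add(Node("generate",
           lambda text: {"image": f"<image for: {text}>"},
           inputs={"text": ("prompt", "text")}))
result = g.run("generate")
```

The cache ensures each node runs once per invocation even when several downstream nodes consume its output, which is the property that makes complex pipelines like the SDXL base+refiner graph practical.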


Installation / Upgrading

Installing using the InvokeAI zip file installer

To install 3.0.0 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you have an earlier version of InvokeAI installed, we strongly recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.

InvokeAI-installer-v3.0.0rc1.zip

Upgrading in place

All users can upgrade from the 3.0 beta releases using the launcher's "upgrade" facility. If you are on Linux or Macintosh, you may also upgrade a 2.3.2 or later version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see the workaround below):

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the upgrade menu option [9]
  4. Select "Manually enter the tag name for the version you wish to update to" option [3]
  5. Select option [1] to upgrade to the latest version.
  6. When the upgrade is complete, the main menu will reappear. Choose "rerun the configure script to fix a broken install" option [7]

Windows users can instead follow this recipe:

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.bat
  3. Select the "Developer's console" option [8]
  4. Type the following command:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.0rc1.zip" --use-pep517 --upgrade

(Replace v3.0.0rc1 with the current version number.)

This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.

After you have confirmed everything is working, you may remove the following backup directories and files:

  • invokeai.init.orig
  • models.orig
  • configs/models.yaml.orig
  • embeddings
  • loras

To get back to a working 2.3 directory, rename all the *.orig files and directories to their original names (without the .orig), run the update script again, and select [1] "Update to the latest official release".

Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory

We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:

invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>

Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)

Upgrading using pip

Once 3.0.0 is released, developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:

pip install --use-pep517 --upgrade InvokeAI

You may specify a particular version by adding the version number to the command, as in:

pip install --use-pep517 --upgrade InvokeAI==3.0.0rc1

To upgrade and add xformers support if you are not currently using xformers, use:

pip install --use-pep517 --upgrade InvokeAI[xformers]

You can see which versions are available by visiting the PyPI InvokeAI Project Page.
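
To confirm which version ended up installed in the active environment, you can query the package metadata with the standard library (a generic sketch; `pip show InvokeAI` gives the same information):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version string for a package, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# Prints e.g. "3.0.0rc1", or None if InvokeAI is not installed here.
print(installed_version("InvokeAI"))
```

Running this inside the launcher's developer console (option [8]) checks the same virtual environment that the launcher itself uses.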


Getting Started with SDXL

Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high-quality 1024x1024 photorealistic images as well as many other visual styles. As of this writing (July 2023) SDXL has not been officially released, but a pre-release 0.9 version is widely available. InvokeAI provides support for SDXL image generation via its Nodes Editor, a user interface that allows you to create and customize complex image generation pipelines using a drag-and-drop interface. Currently SDXL generation is not directly supported in the text2image, image2image, and canvas panels, but we expect to add this feature as soon as SDXL 1.0 is officially released.

SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. For best results, the initial image is handed off from the base to the refiner before all the denoising steps are complete. It is not clear whether SDXL 1.0, when it is released, will require the refiner.
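
The base-to-refiner handoff can be pictured as splitting the denoising schedule at a fraction. The numbers below are hypothetical and purely illustrative; the actual handoff point is a parameter you set in the SDXL pipeline:

```python
def split_steps(total_steps, handoff):
    """Split a denoising schedule: the base model runs the first fraction
    (handoff, between 0 and 1) of the steps; the refiner runs the rest."""
    base_steps = int(total_steps * handoff)
    return base_steps, total_steps - base_steps

# Hypothetical example: with 30 steps and a handoff at 0.8, the base model
# denoises for 24 steps, then passes the still-noisy latent to the refiner
# for the final 6 steps.
```

Moving the handoff earlier gives the refiner more influence over the final image, which is why adjusting this value changes results so dramatically.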

To experiment with SDXL, you'll need the "base" and "refiner" models. Currently a beta version of SDXL, version 0.9, is available from HuggingFace for research purposes. To obtain access, you will need to register with HF at https://huggingface.co/join, obtain an access token at https://huggingface.co/settings/tokens, and add the access token to your environment. To do this, run the InvokeAI launcher script, activate the InvokeAI virtual environment with option [8], and type the command huggingface-cli login. Paste in your access token from HuggingFace and hit return (the token will not be echoed to the screen).

Now navigate to https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9 and fill out the access request form for research use. You will be granted instant access to download. Next launch the InvokeAI console-based model installer by selecting launcher option [5] or by activating the virtual environment and giving the command invokeai-model-install. In the STARTER MODELS section, select the checkboxes for stable-diffusion-xl-base-0-9 and stable-diffusion-xl-refiner-0-9. Press Apply Changes to install the models and keep the installer running, or Apply Changes and Exit to install the models and exit back to the launcher menu.

Alternatively you can install these models from the Web UI: open the Model Manager (cube icon at the bottom of the left-hand panel) and navigate to Import Models. In the field labeled Location, type in the repo ID of the base model, which is stabilityai/stable-diffusion-xl-base-0.9. Press Add Model and wait for the model to download and install (the page will freeze while this is happening). After receiving confirmation that the model has installed, repeat with stabilityai/stable-diffusion-xl-refiner-0.9.

Note that these are large models (12 GB each) so be prepared to wait a while.

To use the installed models you will need to activate the Node Editor, an advanced feature of InvokeAI. Go to the Settings (gear) icon on the upper right of the Web interface, and activate "Enable Nodes Editor". After reloading the page, an inverted "Y" will appear on the left-hand panel. This is the Node Editor.

Enter the Node Editor and click the Upload button to upload either the SDXL base-only or the SDXL base+refiner pipeline. This will load and display a flow diagram showing the many (and complex) steps involved in generating an SDXL image.

Ensure that the SDXL Model Loader (leftmost column, bottom) is set to load the SDXL base model on your system, and that the SDXL Refiner Model Loader (third column, top) is set to load the SDXL refiner model on your system. Find the nodes that contain the example prompt and style ("bluebird in a sakura tree" and "chinese classical painting") and replace them with the prompt and style of your choice. Then press the Invoke button. If all goes well, an image will eventually be generated and added to the image gallery. Unlike standard rendering, intermediate images are not (yet) displayed during rendering.

Be aware that SDXL support is an experimental feature and is not 100% stable. When designing your own SDXL pipelines, note that certain settings have a disproportionate effect on image quality. In particular, the latents-decode VAE step must be run at fp32 precision (using a slider at the bottom of the VAE node), and images will change dramatically as the denoising threshold used by the refiner is adjusted.

Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images, and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml:

precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.0

Recent bug fixes

  • The node editor has been made an advanced option, and can be made visible by activating it in the Settings dialogue.
  • Upload to the unified canvas is working again.
  • Crashes on platforms using CPU rendering due to mixed half- and full-precision arithmetic have been fixed.
  • Stable Diffusion XL (SDXL) 0.9 support in the node editor; see Getting Started with SDXL above.
  • Stable Diffusion XL models added to the optional starter models presented by the model installer.
  • Memory and performance improvements for XL models (thanks to @StAlKeR7779).
  • Image upscaling using the latest version of RealESRGAN (fixed thanks to @psychedelicious).
  • VRAM optimizations to allow SDXL to run in 8 GB VRAM environments.
  • Feature-complete Model Manager in the Web GUI to provide online model installation, configuration and deletion.
  • Recommended LoRA and ControlNet models added to the model installer.
  • UI tweaks, including updated hotkeys.
  • Translation and tooltip fixes.
  • Documentation fixes, including descriptions of all options in invokeai.yaml.
  • Improved support for half-precision generation on Macintoshes.
  • Improved long prompt support.
  • Fixed the "Package 'invokeai' requires a different Python" error.

What's Missing in v3.0.0

Relative to version 2.3.5, some features are missing or not quite working yet. These include:

  • SDXL models can only be used in the node editor, and not in the text2img, img2img or unified canvas panels.
  • No easy migration path to import 2.3-generated images into the 3.0 image gallery; users will have to upload to the new gallery interface manually.
  • Diffusers-style LoRA files (with a HuggingFace repository ID) can be imported but do not run. There are very few of these models and they will not be supported at release time.
  • Only diffusers-style ControlNet files (with a HuggingFace repository ID) can be applied.

The following 2.3 features are not available:

  • Variation generation (slated for addition in v3.1)
  • Perlin Noise (not widely used; permanently removed)
  • Noise Threshold (only available through the Node Editor, slated for addition in v3.1)
  • Symmetry (not widely used; permanently removed)
  • Seamless tiling (may be restored in v3.1)
  • Face restoration (obsoleted by newer models; permanently removed)

Getting Help

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.

Getting a Stack Trace for Bug Reporting

If you are getting the message "Server Error" in the web interface, you can help us track down the bug by getting a stack trace from the failed operation. This involves several steps. Please see this Discord thread for a step-by-step guide to generating stack traces.


Contributing to InvokeAI

If you are looking for a stable version of InvokeAI, either use this release or install from the v2.3 source code branch. Developers seeking to contribute to InvokeAI should use the head of the main branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.


Change Log


Full Changelog: v2.3.5.post2...v3.0.0rc1
