InvokeAI Version 2.2.0


With InvokeAI 2.2, this project now provides enthusiasts and professionals with a robust workflow solution for creating AI-generated and human-facilitated compositions. Additional enhancements improve safety, ease of use, and installation.

Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).

You can see the release video here, which introduces the main WebUI enhancement for version 2.2 - The Unified Canvas. This new workflow is the biggest enhancement added to the WebUI to date, and unlocks a stunning amount of potential for users to create and iterate on their creations. The following sections describe what's new for InvokeAI.

Update 30 November -

  • The Unified Canvas: The Web UI now features an infinite canvas capable of outpainting, inpainting, img2img, and txt2img, so you can streamline and extend your creative workflow. The canvas was rewritten to greatly improve performance and to add support for features like Paint Brushing, Unlimited History, Real-Time Progress displays, and more.

  • Embedding Management: Easily pull from the top embeddings on Huggingface directly within Invoke, using the embed token to generate the exact style you want. With the ability to use multiple embeds simultaneously, you can easily import and explore different styles within the same session!

  • Viewer: The Web UI now also features a Viewer that lets you inspect your invocations in greater detail. No more opening the images in your external file explorer, even with large upscaled images!

  • 1-Click Installer Launch: With our official 1-click installer, getting started with InvokeAI has never been easier. Our OS-specific bundles (Mac M1/M2, Windows, and Linux) will get everything set up for you. Click and get going. See Installation.

  • Model Safety: A checkpoint scanner (picklescan) has been added to the initialization process for new models, helping guard against maliciously crafted pickle files.

  • DPM++2 Experimental Samplers: New samplers have been added! Please note that these are experimental, and are subject to change in the future as we continue to enhance our backend system.

For those installing InvokeAI for the first time, please use this recipe:

  • For automated installation, open up the "Assets" section below and download one of the InvokeAI-*.zip files. The instructions in the Installation section of the InvokeAI docs will guide you to the right file and explain what to do with it once you have it.
  • For manual installation, download one of the "Source Code" archive files located in the Assets below, unpack the file, and enter the InvokeAI directory that it creates. Alternatively, you may clone the source code repository using the command git clone https://github.com/invoke-ai/InvokeAI. Then follow the instructions in Manual Installation.

Upgrading
For those wishing to upgrade from an earlier version, please use this recipe:

  • Download one of the "Source Code" archive files located in the Assets below, unpack the file, and enter the InvokeAI directory that it creates.
  • Alternatively, if you have previously cloned the InvokeAI repository, you may update it by entering the InvokeAI directory and running git checkout main followed by git pull.
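As a sanity check, the git update commands can be exercised end to end. The sketch below runs them against a throwaway local repository that stands in for the real GitHub remote, so it works anywhere; the scratch paths and demo committer identity are purely illustrative:

```shell
# Scratch demo: a local bare-bones "origin" stands in for GitHub.
set -e
scratch=$(mktemp -d)
git init -q -b main "$scratch/origin"
git -C "$scratch/origin" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git clone -q "$scratch/origin" "$scratch/InvokeAI"
cd "$scratch/InvokeAI"

# The actual upgrade recipe from the text:
git checkout main
git pull
```

In a real checkout, only the last two commands are needed, run from your existing InvokeAI directory.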
Select the appropriate environment file for your operating system and GPU hardware. A number of files can be found in a new environments-and-requirements directory:

environment-lin-amd.yml # Linux with an AMD (ROCm) GPU
environment-lin-cuda.yml # Linux with an NVIDIA CUDA GPU
environment-mac.yml # Macintoshes with MPS acceleration
environment-win-cuda.yml # Windows with an NVIDIA CUDA GPU

Important step that developers tend to miss! Either copy the selected environment file to the root directory with the name environment.yml, or make a symbolic link from environment.yml to the selected environment file:

Macintosh and Linux using a symbolic link:
ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml

Replace xxx and yyy with the appropriate OS and GPU codes.
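For example, on a Linux machine with an NVIDIA CUDA GPU (using one of the files listed above), the link command would be:

```shell
# Creates (or overwrites) environment.yml as a link to the Linux CUDA file
ln -sf environments-and-requirements/environment-lin-cuda.yml environment.yml
```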

Windows:
copy environments-and-requirements\environment-win-cuda.yml environment.yml

When this is done, confirm that a file environment.yml has been created in the InvokeAI root directory and that it points to the correct file in the environments-and-requirements directory.
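One way to confirm the link on Macintosh and Linux is with readlink. The sketch below uses a scratch directory mirroring the relevant part of the InvokeAI tree (the lin-cuda file is just one of the listed choices); in practice you would run only the readlink command from the InvokeAI root. On Windows, where the file was copied rather than linked, simply open environment.yml and inspect its contents.

```shell
# Scratch-directory demo of the verification step.
cd "$(mktemp -d)"
mkdir environments-and-requirements
touch environments-and-requirements/environment-lin-cuda.yml
ln -sf environments-and-requirements/environment-lin-cuda.yml environment.yml
readlink environment.yml   # prints environments-and-requirements/environment-lin-cuda.yml
```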
Now run the following commands in the InvokeAI directory.

conda env update
conda activate invokeai
python scripts/preload_models.py

Additional installation information, including recipes for installing without Conda, can be found in Manual Installation

Contributing
Please see CONTRIBUTORS for a list of the many individuals who contributed to this project. Also many thanks to the dozens of patient testers who flushed out bugs in this release before it went live.
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with how
to contribute to GitHub projects, here is a
Getting Started Guide.

The most important thing to know about contributing code is to make your pull request against the "development" branch, not "main". This will help keep public breakage to a minimum and will allow you to propose more radical changes.

Support
For support, please use this repository's GitHub Issues tracking service. Live support is also available on the InvokeAI Discord server.
