invoke-ai/InvokeAI v2.3.3-rc3
InvokeAI Version 2.3.3 - A Stable Diffusion Toolkit


We are pleased to announce a bugfix update to InvokeAI with the release of version 2.3.3.

  • What's New
  • Installation and Upgrading
  • Getting Help
  • Known Bugs
  • Detailed Change Log
  • Acknowledgements

What's New in 2.3.3

This is a bugfix and minor feature release.

Bugfixes

Since version 2.3.2 the following bugs have been fixed:

  1. When using legacy checkpoints with an external VAE, the VAE file is now scanned for malware prior to loading. Previously only the main model weights file was scanned.
  2. Textual inversion will select an appropriate batch size based on whether xformers is active, and will default to xformers enabled if the library is detected.
  3. The batch script log file names have been fixed to be compatible with Windows.
  4. Occasional corruption of the .next_prefix file (which stores the next output file name in sequence) on Windows systems is now detected and corrected.
  5. An infinite loop when opening the developer's console from within the invoke.sh script has been corrected.

Enhancements

  1. It is now possible to load and run several community-contributed SD-2.0-based models, including the infamous "Illuminati" model.
  2. The "NegativePrompts" embedding file, and others like it, can now be loaded by placing it in the InvokeAI embeddings directory.
  3. If no --model is specified at launch time, InvokeAI will remember the last model used and restore it the next time it is launched.
  4. On Linux systems, the invoke.sh launcher now uses a prettier console-based interface. To take advantage of it, install the dialog package using your package manager (e.g. sudo apt install dialog).
  5. When loading legacy models (safetensors/ckpt) you can specify a custom config file and/or a VAE by placing like-named files in the same directory as the model, following this naming pattern:
my-favorite-model.ckpt
my-favorite-model.yaml
my-favorite-model.vae.pt

Installation / Upgrading

To install or upgrade to InvokeAI 2.3.3 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

InvokeAI-installer-v2.3.3-rc3.zip
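
If you prefer a terminal, a minimal command-line sequence looks like this (the name of the unpacked folder is an assumption; use whatever directory the zip file actually extracts to):

unzip InvokeAI-installer-v2.3.3-rc3.zip
cd InvokeAI-Installer          # assumed folder name; check your extraction
./install.sh                   # Macintosh/Linux; on Windows run install.bat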

To update from 2.3.1 or 2.3.2, you may use the "update" option (choice 6) in the invoke.sh/invoke.bat launcher script, or you may re-run the installer. When the installer asks you to confirm the location of the invokeai directory, type in the path to the directory you are already using if it is not the same as the one selected automatically. When the installer asks you to confirm that you want to install into an existing directory, simply answer "yes".

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then running pip install --use-pep517 --upgrade InvokeAI. You may request a particular version by adding the version number to the package name, as in InvokeAI==2.3.3. To upgrade to an xformers-enabled version if you are not currently using xformers, use pip install --use-pep517 --upgrade InvokeAI[xformers]. You can see which versions are available on the PyPI InvokeAI Project Page.
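
For example, assuming the InvokeAI virtual environment is already activated, the commands above look like this:

pip install --use-pep517 --upgrade InvokeAI==2.3.3
pip install --use-pep517 --upgrade "InvokeAI[xformers]"

(The quotes around InvokeAI[xformers] are only needed on shells such as zsh that treat square brackets specially.)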

Known Bugs in 2.3.3

These are known bugs in the release.

  1. The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
  2. Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.

What's Changed

  • Enhance model autodetection during import by @lstein in #3043
  • Correctly load legacy checkpoint files built on top of SD 2.0/2.1 bases, such as Illuminati 1.1 by @lstein in #3058
  • Keep torch version at 1.13.1 by @JPPhoto in #2985
  • Fix textual inversion documentation and code by @lstein in #3015
  • fix corrupted outputs/.next_prefix file by @lstein in #3020
  • fix batch generation logfile name to be compatible with Windows OS by @lstein in #3018
  • Security patch: Scan all pickle files, including VAEs; default to safetensor loading by @lstein in #3011
  • prevent infinite loop when launching developer's console by @lstein in #3016
  • Prettier console-based frontend for invoke.sh on Linux systems with "dialog" installed.

Full Changelog: v2.3.2.post1...v2.3.3-rc1

Acknowledgements

Many thanks to @psychedelicious, @blessedcoolant (Vic), @JPPhoto (Jonathan Pollack), @JoshuaKimsey, and our crack team of Discord moderators, @gogurtenjoyer and @whosawhatsis, for all their contributions to this release.

