invoke-ai/InvokeAI v5.6.0rc2

Pre-release

This release brings major improvements to Invoke's memory management, plus a few minor fixes.

Memory Management Improvements (aka Low-VRAM mode)

The goal of these changes is to allow users with low-VRAM GPUs to run even the beefiest models, like the 24GB unquantised FLUX dev model.

Despite the focus on low-VRAM GPUs and the colloquial name "Low-VRAM mode", most users benefit from these improvements to Invoke's memory management.

Low-VRAM mode works on systems with dedicated GPUs (Nvidia GPUs on Windows/Linux and AMD GPUs on Linux). It allows you to generate even if your GPU doesn't have enough VRAM to hold full models.

Low-VRAM mode involves 3 features, each of which can be configured or fine-tuned:

  • Partial model loading
  • Dynamic RAM and VRAM cache sizes
  • Working memory

Most users should only need to enable partial loading by adding this line to their invokeai.yaml file:

enable_partial_loading: true
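For those who also want to fine-tune the cache sizes mentioned above, a fuller invokeai.yaml sketch might look like this (the gigabyte values are illustrative placeholders, not recommendations - tune them to your hardware):

```yaml
# Main Low-VRAM mode switch: load models partially instead of
# requiring the whole model to fit in VRAM at once
enable_partial_loading: true

# Optional fine-tuning of the dynamic cache sizes
# (example values only - adjust to your system's RAM/VRAM)
max_cache_ram_gb: 16
max_cache_vram_gb: 6
```

If you omit the cache settings, Invoke sizes the caches dynamically, which is the recommended starting point.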

🚨 Windows users should also disable the Nvidia sysmem fallback.

For more details and instructions for fine-tuning, see the Low-VRAM mode docs.

Thanks to @RyanJDick for designing and implementing these improvements!

Changes since previous release candidate (v5.6.0rc1)

  • Fix some model loading errors that occurred in edge cases.
  • Fix error when using DPM++ schedulers with certain models. Thanks @Vargol!
  • Deprecate the ram and vram settings in favor of the new max_cache_ram_gb and max_cache_vram_gb settings. This eases the upgrade path for users who had manually configured ram and vram in the past.
  • Fix (maybe, hopefully) the app scrolling off screen when run via launcher.
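For anyone who had manually pinned the old cache settings, the migration is a direct rename in invokeai.yaml (a sketch; both the old and new keys take sizes in gigabytes, and the values below are placeholders):

```yaml
# Old keys (deprecated in this release):
# ram: 12
# vram: 4

# New equivalents:
max_cache_ram_gb: 12
max_cache_vram_gb: 4
```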

The launcher itself has also been updated to fix a handful of issues, including one that required a fresh install every time you started the launcher, and one that caused systems with AMD GPUs to fall back to the CPU.

Other Changes

  • Fixed issue where excessively long board names could cause performance issues.
  • Reworked error handling when installing models from a URL.
  • Fix error when using DPM++ schedulers with certain models. Thanks @Vargol!
  • Fix (maybe, hopefully) the app scrolling off screen when run via launcher.
  • Updated first run screen and OOM error toast with links to Low-VRAM mode docs.
  • Fixed link to Scale setting's support docs.
  • Tidied some unused variables. Thanks @rikublock!
  • Added typegen check to CI pipeline. Thanks @rikublock!
  • Added stereogram nodes to Community Nodes docs. Thanks @simonfuhrmann!
  • Updated installation-related docs (quick start, manual install, dev install).
  • Add Low-VRAM mode docs.

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of Python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you already have the launcher, you can use it to update your existing install.

We've just updated the launcher to v1.2.1 with a handful of fixes. To update the launcher itself, download the latest version from the quick start guide - the download links are kept up to date.

Legacy Scripts (not recommended!)

We recommend using the launcher, as described in the previous section!

To install or update with the outdated legacy scripts 😱, download the latest legacy scripts and follow the legacy scripts instructions.

Full Changelog: v5.5.0...v5.6.0rc2
