invoke-ai/InvokeAI v6.9.0rc1

This release focuses on improvements to Invoke's Model Manager. The changes are mostly internal, with one significant user-facing change and a data migration.

On first run after installing this release, Invoke will do some data migrations:

  • Run-of-the-mill database updates.
  • Update some model records to work with internal Model Manager changes, described below.
  • Restructure the Invoke-managed models directory into a flat directory structure, where each model gets its own folder named by the model's UUID. Models outside the Invoke-managed models directory are not moved.

If you see any errors or run into any problems, please create a GH issue or ask for help in the #new-release-discussion channel of the Invoke Discord.

Model Installation Improvements

Invoke analyzes models during install to attempt to identify them, recording their attributes in the database. This includes the type of model, its base architecture, its file format, and so on. This release includes a number of improvements to that process, both user-facing and internal.
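For illustration, the record Invoke keeps per model looks roughly like the sketch below. The field and enum names are ours, not Invoke's actual schema; they are only meant to make the attributes concrete.

```python
from dataclasses import dataclass
from enum import Enum


class ModelType(str, Enum):
    MAIN = "main"
    LORA = "lora"
    VAE = "vae"
    UNKNOWN = "unknown"  # new in this release: the fallback type described below


@dataclass
class ModelRecord:
    """Simplified, illustrative view of the attributes recorded per model."""

    key: str     # UUID assigned at install time; also the model's folder name (see below)
    name: str
    type: ModelType
    base: str    # base architecture, e.g. "sd-1", "sdxl", "flux"
    format: str  # file format, e.g. "checkpoint", "diffusers"
    path: str    # where the model's files live on disk
```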

Unknown Models

Previously, when this identification failed, we gave up on that model. If you had downloaded the model via Invoke, we would delete the downloaded file.

As of this release, if we cannot identify a model, we will install it as an Unknown model. If you know what kind of model it is, you can try editing the model via the Model Manager UI to set its type, base, format, and so on. Invoke may be able to run the model after this.

If the model still doesn't work, please create a GH issue linking to the model so we can improve model support. The internal changes in this release are focused on making it easier for contributors to support new models.
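Conceptually, the install flow now keeps the file and falls back to an Unknown record instead of discarding it. Here is a minimal sketch of that logic, reusing the illustrative types above; `identify_model` is a hypothetical stand-in for the classification step, not Invoke's actual function.

```python
from pathlib import Path
from typing import Optional
from uuid import uuid4


def install_model(path: str) -> "ModelRecord":
    """Install a model file, keeping it even if identification fails.

    `identify_model` is a hypothetical stand-in for the classification step
    (sketched further below); it returns None when no classifier matches.
    """
    record: Optional["ModelRecord"] = identify_model(path)
    if record is None:
        # Previously the downloaded file was deleted at this point. Now it is
        # kept and recorded as Unknown, so the user can set type/base/format
        # via the Model Manager UI later.
        record = ModelRecord(
            key=uuid4().hex,
            name=Path(path).stem,
            type=ModelType.UNKNOWN,
            base="unknown",
            format="unknown",
            path=path,
        )
    return record
```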

Invoke-managed Models Directory

Previously, as a relic of times long past, Invoke's internal model storage was organized in nested folders: <models_dir>/<type>/<base>/model.safetensors. Many moons ago, we didn't have a database, and models were identified by putting them into the right folder. This has not been the case for a long time.

As of this release, Invoke's internal model storage has a normalized, flat directory structure. Each model gets its own folder, named after its unique key: <models_dir>/<model_key_uuid>/model.safetensors.

On first startup of this release, Invoke will move model files into the new flat structure. Your non-Invoke-managed models (i.e. models outside the Invoke-managed models directory) won't be touched.
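For the curious, the one-time move amounts to something like the sketch below. It is illustrative only: the attribute names and the assumption that non-managed models are stored with absolute paths are ours, and the real migration also updates the database.

```python
import shutil
from pathlib import Path


def migrate_to_flat_layout(models_dir: Path, records) -> None:
    """One-time move of Invoke-managed models into <models_dir>/<model_key_uuid>/.

    `records` stands in for the model records loaded from the database. In this
    sketch, models outside the managed directory are assumed to be stored with
    absolute paths and are skipped.
    """
    for record in records:
        src = Path(record.path)
        if src.is_absolute():
            continue  # not Invoke-managed; leave it where it is
        old_path = models_dir / src        # e.g. <models_dir>/lora/sd-1/some-model.safetensors
        new_dir = models_dir / record.key  # e.g. <models_dir>/<uuid>/
        new_dir.mkdir(parents=True, exist_ok=True)
        shutil.move(str(old_path), str(new_dir / old_path.name))
        # The real migration also persists the new relative path to the database.
        record.path = str(Path(record.key) / old_path.name)
```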

We understand this change may seem user-unfriendly at first, but there are good reasons for it:

  • This structure eliminates the possibility of model name conflicts, which have caused numerous hard-to-fix bugs and errors.
  • It reinforces that the internal models directory is Invoke-managed:
    • Adding models to this directory manually does not add them to Invoke. With the previous structure, users often dropped models into a folder and expected them to work.
    • Deleting models from this directory, or moving them around within it, causes the database to lose track of them.
  • It obviates the need to move models around when changing their type and base.

Refactored Model Identification System

Several months ago, we started working on a new API to improve model identification (aka "probing" or "classification"). This process involves analyzing a model's files to determine what kind of model it is.

As of this release, the new API is complete and all legacy model identification logic has been ported to it. Along with the changes in #8577, the process of adding new models to Invoke is much simpler.
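The real API lives in the Invoke codebase; purely as an illustration of the general pattern, a pluggable identification API might look something like the sketch below, where `load_state_dict`, the classifier, and the registry are hypothetical names of ours.

```python
from typing import Callable, Optional

# Hypothetical illustration of a pluggable identification ("probing") API.
# Each classifier inspects a model's config or state-dict keys/shapes and
# returns a record if it recognizes the model, or None if it does not.
Classifier = Callable[[dict], Optional["ModelRecord"]]

CLASSIFIERS: list[Classifier] = []


def register_classifier(fn: Classifier) -> Classifier:
    """Supporting a new model type means writing one classifier and registering it."""
    CLASSIFIERS.append(fn)
    return fn


@register_classifier
def classify_sd_checkpoint(state_dict: dict) -> Optional["ModelRecord"]:
    # Toy heuristic: Stable Diffusion-style single-file checkpoints carry UNet
    # keys like this one; a real classifier also checks tensor shapes to tell
    # the different bases apart.
    if "model.diffusion_model.input_blocks.0.0.weight" in state_dict:
        return ModelRecord(key="", name="", type=ModelType.MAIN,
                           base="sd-1", format="checkpoint", path="")
    return None


def identify_model(path: str) -> Optional["ModelRecord"]:
    state_dict = load_state_dict(path)  # hypothetical helper: loads keys/shapes from the file
    for classifier in CLASSIFIERS:
        result = classifier(state_dict)
        if result is not None:
            return result
    return None  # caller installs the model as Unknown
```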

Model Identification Test Suite

Besides the business logic improvements, model identification is now fully testable!

When we find a model that is not identified correctly, we can add that model to the test suite, which currently has test cases for 70 models.

Models can be many GB in size and are thus not particularly well-suited to be stored in a git repo. We can work around this by creating lightweight representations of models. Model identification typically relies on analyzing model config files or state dict keys and shapes, but we do not need the tensors themselves for this process.

This allows us to strip out the weights from model files, leaving only the model's "skeleton" as a test case. The 70-model test suite is currently about 115MB but represents hundreds of GB of models.
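As an illustration (not the tooling Invoke actually uses), producing such a skeleton can be as simple as recording each key's shape and dtype from a checkpoint; for .safetensors files this information can even be read straight from the file header without loading any weights.

```python
import json

import torch


def make_skeleton(checkpoint_path: str, skeleton_path: str) -> None:
    """Write a lightweight "skeleton" of a checkpoint: every key with its
    tensor shape and dtype, but none of the weight data."""
    state_dict = torch.load(checkpoint_path, map_location="cpu", weights_only=True)
    skeleton = {
        key: {"shape": list(tensor.shape), "dtype": str(tensor.dtype)}
        for key, tensor in state_dict.items()
    }
    with open(skeleton_path, "w") as f:
        json.dump(skeleton, f, indent=2)
```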

Installing and Updating

The Invoke Launcher is the recommended way to install, update, and run Invoke. It takes care of a lot of details for you, like installing the right version of Python, and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don't want to use the launcher, or need a headless install, you can follow the manual install guide.

What's Changed

Full Changelog: v6.8.1...v6.9.0rc1
