Release Date: March 12, 2026
AudioMuse-AI v0.9.0 introduces the AudioMuse-AI DCLAP model, which replaces CLAP for text search functionality.
DCLAP is significantly faster than CLAP: in testing it ran about 5–6x faster on a Raspberry Pi 5 (8GB RAM + SSD), making semantic text search practical even on lower-power or older hardware.
As with the previous implementation, text-search embeddings can still be disabled by setting the environment variable `CLAP_ENABLED` to `false`.
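For example, a deployment that sets environment variables in the shell could disable text-search embeddings like this (the variable name `CLAP_ENABLED` comes from these notes; the deployment style shown is only an illustration):

```shell
# Disable text-search embedding generation before starting AudioMuse-AI.
# How you pass env vars (shell export, docker-compose "environment:", etc.)
# depends on your deployment; this is the plain-shell form.
export CLAP_ENABLED=false
echo "CLAP_ENABLED=$CLAP_ENABLED"
```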
⚠️ Important: this version requires a clean database and a new full analysis of your audio library.
Databases generated with previous versions are not compatible.
This release also replaces the MTG Essentia Musicnn model with the original Musicnn model.
The original model does not include the classification head (used for attributes such as danceability, happiness, and sadness), as these are now computed from DCLAP embeddings. Additionally, the original Musicnn model is distributed under the ISC license, which is more permissive and better aligned with the goals of this project.
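Computing attributes from embeddings typically works as zero-shot scoring: the track's audio embedding is compared against text-prompt embeddings for each attribute. The sketch below illustrates that idea with cosine similarity and toy vectors; the function names and the exact scoring scheme are assumptions for illustration, not AudioMuse-AI's actual API.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_attributes(audio_emb: np.ndarray,
                     prompt_embs: dict[str, np.ndarray]) -> dict[str, float]:
    """Score each attribute as the similarity between the track embedding
    and a text-prompt embedding for that attribute (hypothetical helper)."""
    return {name: cosine_similarity(audio_emb, emb)
            for name, emb in prompt_embs.items()}

# Toy 4-dimensional vectors stand in for real DCLAP embeddings.
audio = np.array([0.9, 0.1, 0.0, 0.2])
prompts = {
    "danceability": np.array([1.0, 0.0, 0.0, 0.1]),  # e.g. "a danceable track"
    "sadness":      np.array([0.0, 1.0, 0.2, 0.0]),  # e.g. "a sad song"
}
scores = score_attributes(audio, prompts)
```

With a shared embedding space like this, adding a new attribute only requires embedding a new text prompt, rather than retraining a classification head.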
What's Changed
- Musicnn model by @NeptuneHub in #357
- feat: replace Flask dev server with Gunicorn WSGI for production by @12somyasahu in #356
- app.py resolving merge conflict by @NeptuneHub in #358
- Improve search behavior on "playlist from similar song" feature by @sfredo in #335
- DCLAP by @NeptuneHub in #359
New Contributors
- @12somyasahu made their first contribution in #356
- @sfredo made their first contribution in #335
Full Changelog: v0.8.14...v0.9.0