This is a patch release that tags the new llama.cpp version, fixing incompatibilities with Qwen 3 Coder.
## What's Changed

### Other Changes
- docs: ⬆️ update docs version mudler/LocalAI by @localai-bot in #8611
- feat(traces): Add backend traces by @richiejp in #8609
- chore: ⬆️ Update ggml-org/llama.cpp to b908baf1825b1a89afef87b09e22c32af2ca6548 by @localai-bot in #8612
- chore: drop bark.cpp leftovers from pipelines by @mudler in #8614
- fix: merge openresponses messages by @mudler in #8615
- chore: ⬆️ Update ggml-org/llama.cpp to ba3b9c8844aca35ecb40d31886686326f22d2214 by @localai-bot in #8613
**Full Changelog**: v3.12.0...v3.12.1