github mudler/LocalAI v3.7.0





Welcome to LocalAI 3.7.0 👋

This release introduces Agentic MCP support with full WebUI integration, a brand-new neutts TTS backend, fuzzy model search, long-form TTS chunking for chatterbox, and a complete WebUI overhaul.

We've also fixed critical bugs, improved stability, and enhanced compatibility with OpenAI's APIs.


📌 TL;DR – What's New in LocalAI 3.7.0

  • 🤖 Agentic MCP Support (WebUI-enabled): Build AI agents that use real tools (web search, code execution). Fully OpenAI-compatible and integrated into the WebUI.
  • 🎙️ neutts TTS Backend (Neuphonic-powered): Generate natural, high-quality speech with low latency, ideal for voice assistants.
  • 🖼️ WebUI Enhancements: Faster, cleaner UI with real-time updates and full YAML model control.
  • 💬 Long-Text TTS Chunking (chatterbox): Generate natural-sounding long-form audio by intelligently splitting text while preserving context.
  • 🧩 Advanced Agent Controls: Fine-tune agent behavior with new options for retries, reasoning, and re-evaluation.
  • 📸 New Video Creation Endpoint: We now support the OpenAI-compatible /v1/videos endpoint for text-to-video generation.
  • 🐍 Enhanced Whisper Compatibility: whisper.cpp now ships CPU variants (AVX, AVX2, etc.) to prevent illegal-instruction crashes.
  • 🔍 Fuzzy Gallery Search: Find models in the gallery even with typos (e.g., gema finds gemma).
  • 📦 Easier Model & Backend Management: Import, edit, and delete models directly via clean YAML in the WebUI.
  • ▶️ Realtime Example: Check out the new multilingual realtime voice-assistant example.
  • ⚠️ Security, Stability & API Compliance: Fixed critical crashes, deadlocks, session events, OpenAI compliance issues, and JSON-schema panics.
  • 🧠 Qwen 3 VL: Support for Qwen 3 VL with llama.cpp/gguf models.

🔥 What's New in Detail

🤖 Agentic MCP Support – Build Intelligent, Tool-Using AI Agents

We're proud to announce full Agentic MCP support: a feature for building AI agents that can reason, plan, and execute actions using external tools such as web search, code execution, and data retrieval. You can keep using the standard chat/completions endpoint, now powered by an agent in the background.

Full documentation is available here

✅ Now in WebUI: A dedicated toggle appears in the chat interface when a model supports MCP. Just click to enable agent mode.

✨ Key Features:

  • New Endpoint: POST /mcp/v1/chat/completions (OpenAI-compatible).
  • Flexible Tool Configuration:
    mcp:
      stdio: |
        {
          "mcpServers": {
            "duckduckgo": {
              "command": "docker",
              "args": ["run", "-i", "--rm", "ghcr.io/mudler/mcps/duckduckgo:master"]
            }
          }
        }
  • Advanced Agent Control via agent config:
    agent:
      max_attempts: 3
      max_iterations: 5
      enable_reasoning: true
      enable_re_evaluation: true
    • max_attempts: Retry failed tool calls up to N times.
    • max_iterations: Limit how many times the agent can loop through reasoning.
    • enable_reasoning: Allow step-by-step thought processes (e.g., chain-of-thought).
    • enable_re_evaluation: Re-analyze decisions when tool results are ambiguous.

You can find some plug-and-play MCPs here: https://github.com/mudler/MCPs
Under the hood, MCP functionality is powered by https://github.com/mudler/cogito
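Since the agent sits behind the familiar OpenAI chat schema, a request to the MCP endpoint looks like any other chat completion. A minimal sketch, assuming a LocalAI instance on localhost:8080 and a hypothetical model name `my-agent-model` whose config enables `mcp:` servers:

```python
import json

# Build an OpenAI-style chat request for LocalAI's MCP endpoint
# (POST /mcp/v1/chat/completions). "my-agent-model" is a placeholder
# for any model whose config enables `mcp:` servers.
payload = {
    "model": "my-agent-model",
    "messages": [
        {"role": "user", "content": "Find the latest LocalAI release and summarize it."}
    ],
}
body = json.dumps(payload).encode("utf-8")

# Sending it requires a running LocalAI instance:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8080/mcp/v1/chat/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The response follows the usual chat/completions shape, so existing OpenAI client code should work by only swapping the endpoint path.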

πŸ–ΌοΈ WebUI enhancements

The WebUI has received a major overhaul:

  • The chat view now shows an MCP toggle for models that have mcp settings enabled in their config file.
  • The model editor has been simplified to show and edit the model's YAML settings directly.
  • More reactive UI: dropped HTMX in favor of Alpine.js and vanilla JavaScript.
  • Various fixes, including model deletion.

πŸŽ™οΈ Introducing neutts TTS Backend – Natural Speech, Low Latency

Say hello to neutts, a new, lightweight TTS backend powered by Neuphonic that delivers high-quality, natural-sounding speech with minimal overhead.

πŸŽ›οΈ Setup Example

name: neutts-english
backend: neutts
parameters:
  model: neuphonic/neutts-air
tts:
  audio_path: "./output.wav"
  streaming: true
options:
  # text transcription of the provided audio file
  - ref_text: "So I'm live on radio..."
known_usecases:
  - tts
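With a model like the one above installed, speech can be requested over LocalAI's TTS API. A minimal sketch, assuming the `/tts` endpoint and the `neutts-english` model name from the YAML above (the input text is illustrative):

```python
import json

# Request body for LocalAI's TTS endpoint (POST /tts), referencing
# the "neutts-english" model defined in the YAML above.
payload = {
    "model": "neutts-english",
    "input": "Hello from LocalAI and neutts!",
}
body = json.dumps(payload).encode("utf-8")

# Sending it returns audio bytes (requires a running LocalAI instance):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8080/tts",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     with open("output.wav", "wb") as f:
#         f.write(resp.read())
```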

🐍 Whisper.cpp enhancements

whisper.cpp CPU variants are now available for:

  • avx
  • avx2
  • avx512
  • fallback (no optimized instructions available)

These variants are optimized for specific instruction sets and reduce crashes on older or non-AVX CPUs.

πŸ” Smarter Gallery Search: Fuzzy & Case-Insensitive Matching

Searching for gemma now finds gemma-3, gemma2, etc., even with typos like gemaa or gema.

🧩 Improved Tool & Schema Handling – No More Crashes

We've fixed multiple edge cases that caused crashes or silent failures in tool usage.

✅ Fixes:

  • Nullable JSON Schemas: "type": ["string", "null"] now works without panics.
  • Empty Parameters: Tools with missing or empty parameters are now handled gracefully.
  • Strict Mode Enforcement: When strict_mode: true, the model must pick a tool; no more skipping.
  • Multi-Type Arrays: Safe handling of ["string", "null"] in function definitions.

🔄 Interaction with Grammar Triggers: strict_mode and grammar rules work together: if a tool is required and the function definition is invalid, the server returns a clear JSON error instead of crashing.
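For instance, a tool definition that uses a nullable, multi-type field, the shape that previously could panic the server, now parses cleanly. A sketch of such a definition (the tool name and fields are illustrative):

```python
import json

# A chat "tools" entry whose parameter schema uses a multi-type
# array ("type": ["string", "null"]), the shape that used to panic.
tool = {
    "type": "function",
    "function": {
        "name": "lookup_user",  # illustrative tool name
        "parameters": {
            "type": "object",
            "properties": {
                "user_id": {"type": "string"},
                # Nullable field: either a string or an explicit null
                "nickname": {"type": ["string", "null"]},
            },
            "required": ["user_id"],
        },
    },
}

# The definition round-trips as plain JSON, ready to be sent in the
# "tools" array of a chat/completions request.
print(json.dumps(tool, indent=2))
```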

📸 New Video Creation Endpoint: OpenAI-Compatible

LocalAI now supports OpenAI's /v1/videos endpoint for generating videos from text prompts.

📌 Usage Example:

curl http://localhost:8080/v1/videos \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-..." \
  -d '{
    "model": "sora",
    "prompt": "A cat walking through a forest at sunset",
    "size": "1024x576"
  }'

🧠 Qwen 3 VL in llama.cpp

Support has been added for Qwen 3 VL in llama.cpp, and we have updated llama.cpp to the latest upstream. As a reminder, Qwen 3 VL and other multimodal models are also compatible with our vLLM and MLX backends. Qwen 3 VL models are already available in the model gallery:

  • qwen3-vl-30b-a3b-instruct
  • qwen3-vl-30b-a3b-thinking
  • qwen3-vl-4b-instruct
  • qwen3-vl-32b-instruct
  • qwen3-vl-4b-thinking
  • qwen3-vl-2b-thinking
  • qwen3-vl-2b-instruct

Note: upgrading the llama.cpp backend is necessary if you already have a LocalAI installation.
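Since these are multimodal models served through the usual chat API, an image can be passed using the standard OpenAI content-parts format. A sketch, assuming the qwen3-vl-4b-instruct model from the gallery and an illustrative image URL:

```python
import json

# OpenAI-style vision request: a text part plus an image_url part.
payload = {
    "model": "qwen3-vl-4b-instruct",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this picture?"},
                {
                    "type": "image_url",
                    # Illustrative URL; base64 data URIs also fit here
                    "image_url": {"url": "https://example.com/cat.png"},
                },
            ],
        }
    ],
}
body = json.dumps(payload).encode("utf-8")

# POST it to a running LocalAI instance:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8080/v1/chat/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```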

🚀 (CI) Gallery Updater Agent: Auto-Detect & Suggest New Models

We've added an autonomous CI agent that scans Hugging Face daily for new models and opens PRs to update the gallery.

✨ How It Works:

  1. Scans HF for new, trending models.
  2. Extracts base model, quantization, and metadata.
  3. Uses cogito (our agentic framework) to assign the model to the correct family and to gather the model information.
  4. Opens a PR with:
    • Suggested name, family, and usecases
    • Link to the HF model
    • YAML snippet for import

🔧 Critical Bug Fixes & Stability Improvements

  • 📌 WebUI crash on model load: fixed the can't evaluate field Name in type string error; models now render even without config files.
  • 🔁 Deadlock in model load/idle checks: guarded against race conditions during model loading; improved performance under load.
  • 📞 Realtime API compliance: added the session.created event and removed the redundant conversation.created; works with VoxInput, OpenAI clients, and more.
  • 📥 MCP response formatting: output is now wrapped in a message field, matching the OpenAI spec for better client compatibility.
  • 🛑 JSON error responses: the API now returns clean JSON instead of HTML, so scripts and libraries no longer break on auth failures.
  • 🔄 Session registration: fixed initial MCP calls failing due to cache issues; reliable first-time use.
  • 🎧 kokoro TTS: now returns the full audio instead of partial output; better for long-form TTS.

🚀 The Complete Local Stack for Privacy-First AI


LocalAI

The free, Open Source OpenAI alternative. Acts as a drop-in replacement REST API compatible with OpenAI specifications for local AI inferencing. No GPU required.

Link: https://github.com/mudler/LocalAI


LocalAGI

A powerful Local AI agent management platform. Serves as a drop-in replacement for OpenAI's Responses API, supercharged with advanced agentic capabilities and a no-code UI.

Link: https://github.com/mudler/LocalAGI


LocalRecall

A RESTful API and knowledge base management system providing persistent memory and storage capabilities for AI agents. Designed to work alongside LocalAI and LocalAGI.

Link: https://github.com/mudler/LocalRecall


❤️ Thank You!

A huge THANK YOU to our growing community! With over 35,000 stars, LocalAI is a true FOSS movement, built by people, for people, with no corporate backing.

If you love privacy-first AI and open source, please:

  • ✅ Star the repo
  • 💬 Contribute code, docs, or feedback
  • 📣 Share with others

Your support keeps this stack alive and evolving!


✅ Full Changelog


What's Changed

Bug fixes πŸ›

  • fix(chatterbox): chunk long text by @mudler in #6407
  • fix(grammars): handle empty parameters on object types by @mudler in #6409
  • fix(mcp): register sessions by @mudler in #6429
  • fix(llama.cpp): correctly set grammar triggers by @mudler in #6432
  • fix(mcp): make responses compliant to OpenAI APIs by @mudler in #6436
  • fix(ui): models without config don't have a .Name field by @mudler in #6438
  • fix(realtime): Add transcription session created event, match OpenAI behavior by @richiejp in #6445
  • fix: guard from potential deadlock with requests in flight by @mudler in #6484
  • fix: handle multi-type arrays in JSON schema to prevent panic by @robert-cronin in #6495
  • fix: properly terminate llama.cpp kv_overrides array with empty key + updated doc by @blob42 in #6672
  • fix: llama dockerfile make package by @blob42 in #6694
  • feat: return complete audio for kokoro by @lukasdotcom in #6842

Exciting New Features 🎉

  • feat: Add Agentic MCP support with a new chat/completion endpoint by @mudler in #6381
  • fix: add strict mode check for no action function by @mudler in #6294
  • feat: add agent options to model config by @mudler in #6383
  • feat(ui): add button to enable Agentic MCP by @mudler in #6400
  • feat(api): support both /v1 and not on openai routes by @mudler in #6403
  • feat(ui): display in index when a model supports MCP by @mudler in #6406
  • feat(neutts): add backend by @mudler in #6404
  • feat(ui): use Alpine.js and drop HTMX by @mudler in #6418
  • chore: change color palette such as is closer to the logo by @mudler in #6423
  • chore(ui): simplify editing and importing models via YAML by @mudler in #6424
  • chore(api): return json errors by @mudler in #6428
  • chore(ui): display models and backends in tables by @mudler in #6430
  • feat(ci): add gallery updater agent by @mudler in #6467
  • feat(gallery): add fuzzy search by @mudler in #6481
  • chore(gallery search): fuzzy with case insentivie by @mudler in #6490
  • feat(ui): add system backend metadata and deletion in index by @mudler in #6546
  • feat(api): OpenAI video create enpoint integration by @gmaOCR in #6777
  • feat: add CPU variants for whisper.cpp by @mudler in #6855
  • feat: do also text match by @mudler in #6891

🧠 Models

  • chore(model gallery): add lemon07r_vellummini-0.1-qwen3-14b by @mudler in #6386
  • chore(model gallery): add liquidai_lfm2-350m-extract by @mudler in #6387
  • chore(model gallery): add liquidai_lfm2-1.2b-extract by @mudler in #6388
  • chore(model gallery): add liquidai_lfm2-1.2b-rag by @mudler in #6389
  • chore(model gallery): add liquidai_lfm2-1.2b-tool by @mudler in #6390
  • chore(model gallery): add liquidai_lfm2-350m-math by @mudler in #6391
  • chore(model gallery): add liquidai_lfm2-8b-a1b by @mudler in #6414
  • chore(model gallery): add gliese-4b-oss-0410-i1 by @mudler in #6415
  • chore(model gallery): add qwen3-deckard-large-almost-human-6b-i1 by @mudler in #6416
  • chore(model gallery): add ai21labs_ai21-jamba-reasoning-3b by @mudler in #6417
  • chore(ui): skip duplicated entries in search list by @mudler in #6425
  • chore(model gallery): add yanolja_yanoljanext-rosetta-12b-2510 by @mudler in #6442
  • chore(model gallery): add agentflow_agentflow-planner-7b by @mudler in #6443
  • chore(model gallery): add gustavecortal_beck by @mudler in #6444
  • chore(model gallery): add qwen3-4b-ra-sft by @mudler in #6458
  • chore(model gallery): add demyagent-4b-i1 by @mudler in #6459
  • chore(model gallery): add boomerang-qwen3-2.3b by @mudler in #6460
  • chore(model gallery): add boomerang-qwen3-4.9b by @mudler in #6461
  • gallery: πŸ€– add new models via gallery agent by @localai-bot in #6478
  • gallery: πŸ€– add new models via gallery agent by @localai-bot in #6480
  • chore(model gallery): πŸ€– add new models via gallery agent by @localai-bot in #6501
  • chore(model gallery): add mira-v1.7-27b-i1 by @mudler in #6503
  • chore(model gallery): πŸ€– add new models via gallery agent by @localai-bot in #6504
  • chore(model gallery): πŸ€– add new models via gallery agent by @localai-bot in #6507
  • chore(model gallery): πŸ€– add new models via gallery agent by @localai-bot in #6512
  • chore(model gallery): πŸ€– add new models via gallery agent by @localai-bot in #6515
  • chore(model gallery): πŸ€– add new models via gallery agent by @localai-bot in #6516
  • chore(model gallery): πŸ€– add new models via gallery agent by @localai-bot in #6519
  • chore(model gallery): πŸ€– add new models via gallery agent by @localai-bot in #6522
  • chore(model gallery): πŸ€– add new models via gallery agent by @localai-bot in #6524
  • chore(model gallery): πŸ€– add new models via gallery agent by @localai-bot in #6534
  • chore(model gallery): πŸ€– add new models via gallery agent by @localai-bot in #6536
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6557
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6566
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6581
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6597
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6636
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6640
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6646
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6658
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6664
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6691
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6697
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6706
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6721
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6767
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6776
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6784
  • chore(model gallery): add allenai_olmocr-2-7b-1025 by @mudler in #6797
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6799
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6854
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6862
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6863
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6864
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6879
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6884
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6908
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6910
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6911
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6919
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6921
  • chore(model gallery): πŸ€– add 1 new models via gallery agent by @localai-bot in #6940
  • chore(model gallery): add qwen3-vl-30b-a3b-instruct by @mudler in #6960
  • chore(model gallery): add huihui-qwen3-vl-30b-a3b-instruct-abliterated by @mudler in #6961
  • chore(model gallery): add qwen3-vl-30b-a3b-thinking by @mudler in #6962
  • chore(model gallery): add qwen3-vl-4b-instruct by @mudler in #6963
  • chore(model gallery): add qwen3-vl-32b-instruct by @mudler in #6964
  • chore(model gallery): add qwen3-vl-4b-thinking by @mudler in #6965
  • chore(model gallery): add qwen3-vl-2b-thinking by @mudler in #6966
  • chore(model gallery): add qwen3-vl-2b-instruct by @mudler in #6967

📖 Documentation and examples

👒 Dependencies

  • chore(deps): bump actions/stale from 10.0.0 to 10.1.0 by @dependabot[bot] in #6392
  • chore(deps): bump github.com/rs/zerolog from 1.33.0 to 1.34.0 by @dependabot[bot] in #6274
  • chore(deps): bump github.com/nikolalohinski/gonja/v2 from 2.3.2 to 2.4.1 by @dependabot[bot] in #6394
  • chore(deps): bump github.com/docker/go-connections from 0.5.0 to 0.6.0 by @dependabot[bot] in #6393
  • chore: update cogito and simplify MCP logics by @mudler in #6413
  • chore(deps): bump github.com/docker/docker from 28.3.3+incompatible to 28.5.0+incompatible by @dependabot[bot] in #6399
  • chore(deps): bump github.com/multiformats/go-multiaddr from 0.16.0 to 0.16.1 by @dependabot[bot] in #6277
  • chore(deps): bump github.com/quic-go/quic-go from 0.54.0 to 0.54.1 in the go_modules group across 1 directory by @dependabot[bot] in #6431
  • chore(deps): bump github/codeql-action from 3 to 4 by @dependabot[bot] in #6451
  • chore(deps): bump github.com/containerd/containerd from 1.7.27 to 1.7.28 by @dependabot[bot] in #6448
  • chore(deps): bump github.com/schollz/progressbar/v3 from 3.14.4 to 3.18.0 by @dependabot[bot] in #6446
  • chore(deps): bump dario.cat/mergo from 1.0.1 to 1.0.2 by @dependabot[bot] in #6447
  • chore(deps): bump github.com/ebitengine/purego from 0.8.4 to 0.9.0 by @dependabot[bot] in #6450
  • chore(deps): bump google.golang.org/grpc from 1.67.1 to 1.76.0 by @dependabot[bot] in #6449
  • feat(mcp): add planning and reevaluation by @mudler in #6541
  • chore(deps): bump github.com/prometheus/client_golang from 1.23.0 to 1.23.2 by @dependabot[bot] in #6600
  • chore(deps): bump github.com/tmc/langchaingo from 0.1.13 to 0.1.14 by @dependabot[bot] in #6604
  • chore(deps): bump securego/gosec from 2.22.9 to 2.22.10 by @dependabot[bot] in #6599
  • chore(deps): bump github.com/gpustack/gguf-parser-go from 0.17.0 to 0.22.1 by @dependabot[bot] in #6602
  • chore(deps): bump github.com/onsi/ginkgo/v2 from 2.25.3 to 2.26.0 by @dependabot[bot] in #6601
  • chore(deps): bump github.com/gofrs/flock from 0.12.1 to 0.13.0 by @dependabot[bot] in #6598
  • chore(deps): bump cogito by @mudler in #6785
  • chore(deps): bump github.com/gofiber/contrib/fiberzerolog from 1.0.2 to 1.0.3 by @dependabot[bot] in #6816
  • chore(deps): bump grpcio from 1.75.1 to 1.76.0 in /backend/python/coqui by @dependabot[bot] in #6822
  • chore(deps): bump mxschmitt/action-tmate from 3.22 to 3.23 by @dependabot[bot] in #6831
  • chore(deps): bump github.com/gofiber/swagger from 1.0.0 to 1.1.1 by @dependabot[bot] in #6825
  • chore(deps): bump github.com/alecthomas/kong from 0.9.0 to 1.12.1 by @dependabot[bot] in #6829
  • chore(deps): bump actions/upload-artifact from 4 to 5 by @dependabot[bot] in #6824
  • chore(deps): bump github.com/klauspost/cpuid/v2 from 2.2.10 to 2.3.0 by @dependabot[bot] in #6821
  • chore(deps): bump grpcio from 1.75.1 to 1.76.0 in /backend/python/rerankers by @dependabot[bot] in #6819
  • chore(deps): bump grpcio from 1.75.1 to 1.76.0 in /backend/python/common/template by @dependabot[bot] in #6830
  • chore(deps): bump grpcio from 1.75.1 to 1.76.0 in /backend/python/bark by @dependabot[bot] in #6826
  • chore(deps): bump grpcio from 1.75.1 to 1.76.0 in /backend/python/vllm by @dependabot[bot] in #6827
  • chore(deps): bump grpcio from 1.75.1 to 1.76.0 in /backend/python/exllama2 by @dependabot[bot] in #6836
  • chore(deps): bump grpcio from 1.75.1 to 1.76.0 in /backend/python/transformers by @dependabot[bot] in #6828
  • chore(deps): bump grpcio from 1.75.1 to 1.76.0 in /backend/python/diffusers by @dependabot[bot] in #6839
  • chore(deps): bump actions/download-artifact from 5 to 6 by @dependabot[bot] in #6837
  • chore(deps): bump github.com/gofiber/template/html/v2 from 2.1.2 to 2.1.3 by @dependabot[bot] in #6832
  • chore(deps): bump fyne.io/fyne/v2 from 2.6.3 to 2.7.0 by @dependabot[bot] in #6840

Other Changes

  • docs: ⬆️ update docs version mudler/LocalAI by @localai-bot in #6378
  • chore: ⬆️ Update ggml-org/llama.cpp to 128d522c04286e019666bd6ee4d18e3fbf8772e2 by @localai-bot in #6379
  • chore: ⬆️ Update ggml-org/llama.cpp to 86df2c9ae4f2f1ee63d2558a9dc797b98524639b by @localai-bot in #6382
  • feat(swagger): update swagger by @localai-bot in #6384
  • chore: ⬆️ Update ggml-org/llama.cpp to ca71fb9b368e3db96e028f80c4c9df6b6b370edd by @localai-bot in #6385
  • chore: ⬆️ Update ggml-org/llama.cpp to 3df2244df40c67dfd6ad548b40ccc507a066af2b by @localai-bot in #6401
  • chore: ⬆️ Update ggml-org/whisper.cpp to c8223a8548ad64435266e551385fc51aca9ee8ab by @localai-bot in #6402
  • chore: ⬆️ Update ggml-org/llama.cpp to aeaf8a36f06b5810f5ae4bbefe26edb33925cf5e by @localai-bot in #6408
  • chore: ⬆️ Update ggml-org/llama.cpp to 9d0882840e6c3fb62965d03af0e22880ea90e012 by @localai-bot in #6410
  • chore: ⬆️ Update ggml-org/whisper.cpp to 8877dfc11a9322ce1990958494cf2e41c54657eb by @localai-bot in #6411
  • chore: ⬆️ Update ggml-org/whisper.cpp to 98930fded1c06e601a38903607af262f04893880 by @localai-bot in #6420
  • chore(deps): bump llama.cpp to '1deee0f8d494981c32597dca8b5f8696d399b0f2' by @mudler in #6421
  • chore: ⬆️ Update ggml-org/whisper.cpp to 85871a946971955c635f56bca24ea2a37fed6324 by @localai-bot in #6435
  • chore: ⬆️ Update ggml-org/llama.cpp to e60f01d941bc5b7fae62dd57fee4cec76ec0ea6e by @localai-bot in #6434
  • chore: ⬆️ Update ggml-org/llama.cpp to 11f0af5504252e453d57406a935480c909e3ff37 by @localai-bot in #6437
  • chore: ⬆️ Update ggml-org/whisper.cpp to a91dd3be72f70dd1b3cb6e252f35fa17b93f596c by @localai-bot in #6439
  • chore: ⬆️ Update ggml-org/llama.cpp to a31cf36ad946a13b3a646bf0dadf2a481e89f944 by @localai-bot in #6440
  • chore: ⬆️ Update ggml-org/llama.cpp to e60f241eacec42d3bd7c9edd37d236ebf35132a8 by @localai-bot in #6452
  • chore: ⬆️ Update ggml-org/llama.cpp to fa882fd2b1bcb663de23af06fdc391489d05b007 by @localai-bot in #6454
  • chore: ⬆️ Update ggml-org/whisper.cpp to 4979e04f5dcaccb36057e059bbaed8a2f5288315 by @localai-bot in #6462
  • chore: ⬆️ Update ggml-org/llama.cpp to 466c1911ab736f0b7366127edee99f8ee5687417 by @localai-bot in #6463
  • chore: ⬆️ Update ggml-org/llama.cpp to 1bb4f43380944e94c9a86e305789ba103f5e62bd by @localai-bot in #6488
  • chore: ⬆️ Update ggml-org/llama.cpp to 66b0dbcb2d462e7b70ba5a69ee8c3899ac2efb1c by @localai-bot in #6520
  • chore: ⬆️ Update ggml-org/llama.cpp to ee09828cb057460b369576410601a3a09279e23c by @localai-bot in #6550
  • chore: ⬆️ Update ggml-org/llama.cpp to cec5edbcaec69bbf6d5851cabce4ac148be41701 by @localai-bot in #6576
  • chore: ⬆️ Update ggml-org/llama.cpp to 84bf3c677857279037adf67cdcfd89eaa4ca9281 by @localai-bot in #6621
  • chore: ⬆️ Update ggml-org/whisper.cpp to 23c19308d8a5786c65effa4570204a881660ff31 by @localai-bot in #6622
  • Revert "chore(deps): bump securego/gosec from 2.22.9 to 2.22.10" by @mudler in #6638
  • chore: ⬆️ Update ggml-org/llama.cpp to 03792ad93609fc67e41041c6347d9aa14e5e0d74 by @localai-bot in #6651
  • chore: ⬆️ Update ggml-org/llama.cpp to a2e0088d9242bd9e57f8b852b05a6e47843b5a45 by @localai-bot in #6676
  • chore: ⬆️ Update ggml-org/whisper.cpp to 322c2adb753a9506f0becee134a7f75e2a6b5687 by @localai-bot in #6677
  • chore: ⬆️ Update ggml-org/llama.cpp to 0bf47a1dbba4d36f2aff4e8c34b06210ba34e688 by @localai-bot in #6703
  • chore: ⬆️ Update ggml-org/llama.cpp to 55945d2ef51b93821d4b6f4a9b994393344a90db by @localai-bot in #6729
  • chore: ⬆️ Update ggml-org/llama.cpp to 5d195f17bc60eacc15cfb929f9403cf29ccdf419 by @localai-bot in #6757
  • chore: ⬆️ Update ggml-org/llama.cpp to bbac6a26b2bd7f7c1f0831cb1e7b52734c66673b by @localai-bot in #6783
  • feat(swagger): update swagger by @localai-bot in #6834
  • chore: ⬆️ Update ggml-org/whisper.cpp to f16c12f3f55f5bd3d6ac8cf2f31ab90a42c884d5 by @localai-bot in #6835
  • chore: ⬆️ Update ggml-org/llama.cpp to 5a4ff43e7dd049e35942bc3d12361dab2f155544 by @localai-bot in #6841
  • chore: ⬆️ Update ggml-org/whisper.cpp to c62adfbd1ecdaea9e295c72d672992514a2d887c by @localai-bot in #6868
  • chore: ⬆️ Update ggml-org/llama.cpp to 851553ea6b24cb39fd5fd188b437d777cb411de8 by @localai-bot in #6869
  • chore: ⬆️ Update ggml-org/llama.cpp to 3464bdac37027c5e9661621fc75ffcef3c19c6ef by @localai-bot in #6896
  • chore: ⬆️ Update ggml-org/llama.cpp to 16724b5b6836a2d4b8936a5824d2ff27c52b4517 by @localai-bot in #6925
  • chore: ⬆️ Update ggml-org/llama.cpp to 4146d6a1a6228711a487a1e3e9ddd120f8d027d7 by @localai-bot in #6945

New Contributors

Full Changelog: v3.6.0...v3.7.0
