mudler/LocalAI v2.27.0


🚀 LocalAI v2.27.0


Welcome to another exciting release of LocalAI! We've been working hard to bring you a fresh WebUI experience and a host of improvements under the hood. Read on for what's new!

🔥 AIO Images Updates

Check out the updated models we're now shipping with our All-in-One images:

CPU All-in-One:

  • Text-to-Text: llama3.1
  • Embeddings: granite-embeddings
  • Vision: minicpm

GPU All-in-One:

  • Text-to-Text: localai-functioncall-qwen2.5-7b-v0.5 (our tiniest flagship model!)
  • Embeddings: granite-embeddings
  • Vision: minicpm
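
These AIO defaults are served through LocalAI's OpenAI-compatible endpoints. As a minimal sketch (assuming the granite-embeddings model above is exposed under that name on your instance — check `/v1/models` for the exact name), an embeddings request could be built and sent like this:

```shell
#!/bin/sh
# Sketch: request embeddings from a running LocalAI AIO instance.
# The model name "granite-embeddings" is an assumption based on the
# AIO defaults listed above; adjust it to what /v1/models reports.
BASE_URL="${LOCALAI_BASE_URL:-http://localhost:8080}"

payload='{"model":"granite-embeddings","input":"LocalAI runs models locally"}'
echo "$payload"

# With a running instance, send it with:
# curl "$BASE_URL/v1/embeddings" \
#   -H "Content-Type: application/json" \
#   -d "$payload"
```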

💻 WebUI Overhaul!

We've given the WebUI a brand-new look and feel. Have a look at the stunning new interface:

Screenshots from the new interface:

  • Talk interface
  • Generate audio (voice-en-us-ryan-low)
  • Models overview
  • Generate images (Flux.1-dev)
  • Chat (localai-functioncall-qwen2.5-7b-v0.5)
  • API overview
  • Login
  • P2P swarm dashboard

How to Use

To get started with LocalAI, you can use our container images. Here’s how to run them with Docker:

# CPU only image:
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-cpu

# Nvidia GPU:
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12

# CPU and GPU image (bigger size):
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest

# AIO images (pre-downloads a set of models ready for use, see https://localai.io/basics/container/)
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
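
Once a container is up, the server exposes an OpenAI-compatible API on port 8080. As a hedged sketch (the model name "gpt-4" is an assumption — the AIO images alias their default text model to OpenAI-style names, and other setups should use whatever `/v1/models` reports), a chat request can be assembled and sent with curl:

```shell
#!/bin/sh
# Sketch: query a running LocalAI instance via its OpenAI-compatible API.
# Assumes the server is reachable at localhost:8080; the model name is
# an assumption (AIO images alias their default text model as "gpt-4").
BASE_URL="${LOCALAI_BASE_URL:-http://localhost:8080}"

payload='{"model":"gpt-4","messages":[{"role":"user","content":"Hello!"}]}'
echo "$payload"

# With a running instance, uncomment:
# curl "$BASE_URL/v1/models"
# curl "$BASE_URL/v1/chat/completions" \
#   -H "Content-Type: application/json" \
#   -d "$payload"
```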

Check out our Documentation for more information.

Key Highlights:

  • Complete WebUI Redesign: A fresh, modern interface with enhanced navigation and visuals.
  • Model Gallery Improvements: Easier exploration with improved pagination and filtering.
  • AIO Image Updates: Smoother deployments with updated models.
  • Stability Fixes: Critical bug fixes in model initialization, embeddings handling, and GPU offloading.

What’s New 🎉

  • Chat Interface Enhancements: Cleaner layout, model-specific UI tweaks, and custom reply prefixes.
  • Smart Model Detection: Automatically links to relevant model documentation based on use.
  • Performance Tweaks: GGUF models now auto-detect context size, and Llama.cpp handles batch embeddings and SIGTERM gracefully.
  • VLLM Config Boost: Added options to disable logging, set dtype, and enforce per-prompt media limits.
  • New model architectures supported: Gemma 3, Mistral, DeepSeek

Bug Fixes 🐛

  • Resolved model icon display inconsistencies.
  • Ensured proper handling of generated artifacts without API key restrictions.
  • Optimized CLIP offloading and Llama.cpp process termination.

Stay Tuned!

We have some incredibly exciting features and updates lined up for you. While we can't reveal everything just yet, keep an eye out for our upcoming announcements – you won't want to miss them!


Do you like the new WebUI? Let us know in the GitHub discussions!

Enjoy 🚀

Full changelog 👇


What's Changed

Bug fixes 🐛

  • fix: change initialization order of llama-cpp-avx512 to go before avx2 variant by @bhulsken in #4837
  • fix(coqui): pin transformers by @mudler in #4875
  • fix(ui): not all models have an Icon by @mudler in #4913
  • fix(models): unify usecases identifications by @mudler in #4914
  • fix(llama.cpp): correctly handle embeddings in batches by @mudler in #4957
  • fix(routes): do not gate generated artifacts via key by @mudler in #4971
  • fix(clip): do not imply GPU offload by default by @mudler in #5010
  • fix(llama.cpp): properly handle sigterm by @mudler in #5099

Exciting New Features 🎉

  • feat(ui): detect model usage and display link by @mudler in #4864
  • feat(vllm): Additional vLLM config options (Disable logging, dtype, and Per-Prompt media limits) by @TheDropZone in #4855
  • feat(ui): show only text models in the chat interface by @mudler in #4869
  • feat(ui): do also filter tts and image models by @mudler in #4871
  • feat(ui): paginate model gallery by @mudler in #4886
  • feat(ui): small improvements to chat interface by @mudler in #4907
  • feat(ui): improve chat interface by @mudler in #4910
  • feat(ui): improvements to index and models page by @mudler in #4918
  • feat: allow to specify a reply prefix by @mudler in #4931
  • feat(ui): complete design overhaul by @mudler in #4942
  • feat(ui): remove api key handling and small ui adjustments by @mudler in #4948
  • feat(aio): update AIO image defaults by @mudler in #5002
  • feat(gguf): guess default context size from file by @mudler in #5089

🧠 Models

  • chore(model gallery): add ozone-ai_0x-lite by @mudler in #4835
  • chore: update Image generation docs and examples by @mudler in #4841
  • chore(model gallery): add kubeguru-llama3.2-3b-v0.1 by @mudler in #4858
  • chore(model gallery): add allenai_llama-3.1-tulu-3.1-8b by @mudler in #4859
  • chore(model gallery): add nbeerbower_dumpling-qwen2.5-14b by @mudler in #4860
  • chore(model gallery): add nbeerbower_dumpling-qwen2.5-32b-v2 by @mudler in #4861
  • chore(model gallery): add nbeerbower_dumpling-qwen2.5-72b by @mudler in #4862
  • chore(model gallery): add pygmalionai_pygmalion-3-12b by @mudler in #4866
  • chore(model gallery): add open-r1_openr1-qwen-7b by @mudler in #4867
  • chore(model gallery): add sentientagi_dobby-unhinged-llama-3.3-70b by @mudler in #4868
  • chore(model gallery): add internlm_oreal-32b by @mudler in #4872
  • chore(model gallery): add internlm_oreal-deepseek-r1-distill-qwen-7b by @mudler in #4873
  • chore(model gallery): add internlm_oreal-7b by @mudler in #4874
  • chore(model gallery): add smirki_uigen-t1.1-qwen-14b by @mudler in #4877
  • chore(model gallery): add smirki_uigen-t1.1-qwen-7b by @mudler in #4878
  • chore(model gallery): add l3.1-8b-rp-ink by @mudler in #4879
  • chore(model gallery): add pocketdoc_dans-personalityengine-v1.2.0-24b by @mudler in #4880
  • chore(model gallery): add rombo-org_rombo-llm-v3.0-qwen-72b by @mudler in #4882
  • chore(model gallery): add ozone-ai_reverb-7b by @mudler in #4883
  • chore(model gallery): add arcee-ai_arcee-maestro-7b-preview by @mudler in #4884
  • chore(model gallery): add steelskull_l3.3-mokume-gane-r1-70b by @mudler in #4885
  • chore(model gallery): add steelskull_l3.3-cu-mai-r1-70b by @mudler in #4892
  • chore(model gallery): add steelskull_l3.3-san-mai-r1-70b by @mudler in #4893
  • chore(model gallery): add nohobby_l3.3-prikol-70b-extra by @mudler in #4894
  • chore(model gallery): add flux.1dev-abliteratedv2 by @mudler in #4895
  • chore(model gallery): add sicariussicariistuff_phi-line_14b by @mudler in #4901
  • chore(model gallery): add perplexity-ai_r1-1776-distill-llama-70b by @mudler in #4902
  • chore(model gallery): add latitudegames_wayfarer-large-70b-llama-3.3 by @mudler in #4903
  • chore(model gallery): add locutusque_thespis-llama-3.1-8b by @mudler in #4912
  • chore(model gallery): add microsoft_phi-4-mini-instruct by @mudler in #4921
  • chore(model gallery): add ozone-research_chirp-01 by @mudler in #4922
  • chore(model gallery): add ozone-research_0x-lite by @mudler in #4923
  • chore(model gallery): add allenai_olmocr-7b-0225-preview by @mudler in #4924
  • chore(model gallery): add ibm-granite_granite-3.2-8b-instruct by @mudler in #4927
  • chore(model gallery): add ibm-granite_granite-3.2-2b-instruct by @mudler in #4928
  • chore(model gallery): add qihoo360_tinyr1-32b-preview by @mudler in #4929
  • chore(model gallery): add thedrummer_fallen-llama-3.3-r1-70b-v1 by @mudler in #4930
  • chore(model gallery): add steelskull_l3.3-mokume-gane-r1-70b-v1.1 by @mudler in #4933
  • chore(model gallery): update qihoo360_tinyr1-32b-preview by @mudler in #4937
  • chore(model gallery): add l3.3-geneticlemonade-unleashed-70b-i1 by @mudler in #4938
  • chore(model gallery): add boomer_qwen_72b-i1 by @mudler in #4939
  • chore(model gallery): add llama-3.3-magicalgirl-2 by @mudler in #4940
  • chore(model gallery): add azura-qwen2.5-32b-i1 by @mudler in #4941
  • chore(model gallery): add llama-3.1-8b-instruct-uncensored-delmat-i1 by @mudler in #4944
  • chore(model gallery): add lolzinventor_meta-llama-3.1-8b-survivev3 by @mudler in #4945
  • chore(model gallery): add llama-3.3-magicalgirl-2.5-i1 by @mudler in #4946
  • chore(model gallery): add qwen_qwq-32b by @mudler in #4952
  • chore(model gallery): add rombo-org_rombo-llm-v3.1-qwq-32b by @mudler in #4953
  • chore(model gallery): add nomic-embed-text-v1.5 by @mudler in #4955
  • chore(model gallery): add granite embeddings models by @mudler in #4956
  • chore(model gallery): add steelskull_l3.3-electra-r1-70b by @mudler in #4960
  • chore(model gallery): add huihui-ai_qwq-32b-abliterated by @mudler in #4961
  • chore(model gallery): add goppa-ai_goppa-logillama by @mudler in #4962
  • chore(model gallery): add tower-babel_babel-9b-chat by @mudler in #4964
  • chore(model gallery): add llmevollama-3.1-8b-v0.1-i1 by @mudler in #4968
  • chore(model gallery): add opencrystal-l3-15b-v2.1-i1 by @mudler in #4969
  • chore(model gallery): add hyperllama3.1-v2-i1 by @mudler in #4970
  • chore(model gallery): add openpipe_deductive-reasoning-qwen-14b by @mudler in #4994
  • chore(model gallery): add openpipe_deductive-reasoning-qwen-32b by @mudler in #4995
  • chore(model gallery): add thedrummer_gemmasutra-small-4b-v1 by @mudler in #4997
  • chore(model gallery): add open-r1_olympiccoder-32b by @mudler in #4998
  • chore(model gallery): add open-r1_olympiccoder-7b by @mudler in #4999
  • chore(model gallery): add trashpanda-org_qwq-32b-snowdrop-v0 by @mudler in #5000
  • chore(model gallery): add gemma-3-27b-it by @mudler in #5003
  • chore(model gallery): add gemma-3-12b-it by @mudler in #5007
  • chore(model gallery): add gemma-3-4b-it by @mudler in #5008
  • chore(model gallery): add gemma-3-1b-it by @mudler in #5009
  • chore(model gallery): add models/qgallouedec_gemma-3-27b-it-codeforces-sft by @mudler in #5013
  • chore(model gallery): add nousresearch_deephermes-3-mistral-24b-preview by @mudler in #5014
  • chore(model gallery): add nousresearch_deephermes-3-llama-3-3b-preview by @mudler in #5015
  • chore(model gallery): add prithivmlmods_viper-coder-32b-elite13 by @mudler in #5016
  • chore(model gallery): add eurollm-9b-instruct by @mudler in #5017
  • chore(model gallery): add allura-org_bigger-body-70b by @mudler in #5021
  • chore(model gallery): add pocketdoc_dans-sakurakaze-v1.0.0-12b by @mudler in #5023
  • chore(model gallery): add beaverai_mn-2407-dsk-qwqify-v0.1-12b by @mudler in #5024
  • chore(model gallery): add readyart_forgotten-safeword-70b-3.6 by @mudler in #5027
  • chore(model gallery): add mproj files for gemma3 models by @mudler in #5028
  • chore(model gallery): add mlabonne_gemma-3-27b-it-abliterated by @mudler in #5031
  • chore(model gallery): add mlabonne_gemma-3-12b-it-abliterated by @mudler in #5032
  • chore(model gallery): add mlabonne_gemma-3-4b-it-abliterated by @mudler in #5033
  • chore(model gallery): add soob3123_amoral-gemma3-12b by @mudler in #5034
  • chore(model-gallery): ⬆️ update checksum by @localai-bot in #5036
  • chore(model gallery): add mistralai_mistral-small-3.1-24b-instruct-2503 by @mudler in #5039
  • chore(model gallery): add gryphe_pantheon-rp-1.8-24b-small-3.1 by @mudler in #5040
  • chore(model gallery): add nvidia_llama-3_3-nemotron-super-49b-v1 by @mudler in #5041
  • chore(model gallery): add gemma-3-4b-it-uncensored-dbl-x-i1 by @mudler in #5043
  • chore(model gallery): add rootxhacker_apollo-v3-32b by @mudler in #5044
  • chore(model gallery): add samsungsailmontreal_bytecraft by @mudler in #5045
  • chore(model gallery): add soob3123_amoral-gemma3-4b by @mudler in #5046
  • chore(model gallery): add qwen-writerdemo-7b-s500-i1 by @mudler in #5049
  • chore(model gallery): add sao10k_llama-3.3-70b-vulpecula-r1 by @mudler in #5050
  • chore(model gallery): add luvgpt_phi3-uncensored-chat by @mudler in #5051
  • chore(model gallery): add knoveleng_open-rs3 by @mudler in #5054
  • chore(model gallery): add thedrummer_fallen-gemma3-4b-v1 by @mudler in #5055
  • chore(model gallery): add thedrummer_fallen-gemma3-12b-v1 by @mudler in #5056
  • chore(model gallery): add thedrummer_fallen-gemma3-27b-v1 by @mudler in #5057
  • chore(model gallery): add huihui-ai_gemma-3-1b-it-abliterated by @mudler in #5058
  • chore(model gallery): add mawdistical_mawdistic-nightlife-24b by @mudler in #5059
  • chore(model gallery): add sicariussicariistuff_x-ray_alpha by @mudler in #5060
  • chore(model gallery): add fiendish_llama_3b by @mudler in #5061
  • chore(model gallery): add impish_llama_3b by @mudler in #5064
  • chore(model gallery): add eximius_persona_5b by @mudler in #5065
  • chore(model gallery): add dusk_rainbow by @mudler in #5066
  • chore(model gallery): add jdineen_llama-3.1-8b-think by @mudler in #5069
  • chore(model gallery): add helpingai_helpingai3-raw by @mudler in #5070
  • chore(model gallery): add alamios_mistral-small-3.1-draft-0.5b by @mudler in #5071
  • chore(model gallery): add gemma-3-glitter-12b-i1 by @mudler in #5074
  • chore(model gallery): add blacksheep-24b-i1 by @mudler in #5075
  • chore(model gallery): add textsynth-8b-i1 by @mudler in #5076
  • chore(model gallery): add soob3123_amoral-gemma3-12b-v2 by @mudler in #5080
  • chore(model gallery): gemma-3-starshine-12b-i1 by @mudler in #5081
  • chore(model gallery): qwen2.5-14b-instruct-1m-unalign-i1 by @mudler in #5082
  • chore(model gallery): thoughtless-fallen-abomination-70b-r1-v4.1-i1 by @mudler in #5083
  • chore(model gallery): fallen-safeword-70b-r1-v4.1 by @mudler in #5084
  • chore(model gallery): add tarek07_legion-v2.1-llama-70b by @mudler in #5087
  • chore(model gallery): add tesslate_tessa-t1-32b by @mudler in #5088
  • chore(model gallery): add tesslate_tessa-t1-14b by @mudler in #5090
  • chore(model gallery): add tesslate_tessa-t1-7b by @mudler in #5091
  • chore(model gallery): add tesslate_tessa-t1-3b by @mudler in #5092
  • chore(model gallery): add chaoticneutrals_very_berry_qwen2_7b by @mudler in #5093
  • chore(model gallery): add galactic-qwen-14b-exp1 by @mudler in #5096
  • chore(model gallery): add forgotten-abomination-70b-v5.0 by @mudler in #5097
  • chore(model gallery): add hammer2.0-7b by @mudler in #5098

👒 Dependencies

  • chore: ⬆️ Update ggml-org/llama.cpp to 2eea03d86a2d132c8245468c26290ce07a27a8e8 by @localai-bot in #4839
  • chore(deps): Bump edgevpn to v0.30.1 by @mudler in #4840
  • chore: ⬆️ Update ggml-org/llama.cpp to 73e2ed3ce3492d3ed70193dd09ae8aa44779651d by @localai-bot in #4854
  • chore: ⬆️ Update ggml-org/llama.cpp to 63e489c025d61c7ca5ec06c5d10f36e2b76aaa1d by @localai-bot in #4865
  • chore: ⬆️ Update ggml-org/llama.cpp to d04e7163c85a847bc61d58c22f2c503596db7aa8 by @localai-bot in #4870
  • chore: ⬆️ Update ggml-org/llama.cpp to c392e5094deaf2d1a7c18683214f007fad3fe42b by @localai-bot in #4876
  • chore: ⬆️ Update ggml-org/llama.cpp to 51f311e057723b7454d0ebe20f545a1a2c4db6b2 by @localai-bot in #4881
  • chore: ⬆️ Update ggml-org/llama.cpp to a28e0d5eb18c18e6a4598286158f427269b1444e by @localai-bot in #4887
  • chore(stable-diffusion-ggml): update, adapt upstream changes by @mudler in #4889
  • chore: ⬆️ Update ggml-org/llama.cpp to 7ad0779f5de84a68143b2c00ab5dc94a948925d3 by @localai-bot in #4890
  • chore(deps): Bump appleboy/ssh-action from 1.2.0 to 1.2.1 by @dependabot in #4896
  • chore(deps): Bump docs/themes/hugo-theme-relearn from 66bc366 to 02bba0f by @dependabot in #4898
  • chore: ⬆️ Update ggml-org/llama.cpp to 7a2c913e66353362d7f28d612fd3c9d51a831eda by @localai-bot in #4899
  • chore: ⬆️ Update ggml-org/llama.cpp to d7cfe1ffe0f435d0048a6058d529daf76e072d9c by @localai-bot in #4908
  • chore: ⬆️ Update ggml-org/llama.cpp to a800ae46da2ed7dac236aa6bf2b595da6b6294b5 by @localai-bot in #4911
  • chore: ⬆️ Update ggml-org/llama.cpp to b95c8af37ccf169b0a3216b7ed691af0534e5091 by @localai-bot in #4916
  • chore: ⬆️ Update ggml-org/llama.cpp to 06c2b1561d8b882bc018554591f8c35eb04ad30e by @localai-bot in #4920
  • chore: ⬆️ Update ggml-org/llama.cpp to 1782cdfed60952f9ff333fc2ab5245f2be702453 by @localai-bot in #4926
  • chore: ⬆️ Update ggml-org/llama.cpp to 14dec0c2f29ae56917907dbf2eed6b19438d0a0e by @localai-bot in #4932
  • chore(deps): Bump docs/themes/hugo-theme-relearn from 02bba0f to 4a4b60e by @dependabot in #4934
  • chore: ⬆️ Update ggml-org/llama.cpp to dfd6b2c0be191b3abe2fd9c1b25deff01c6249d8 by @localai-bot in #4936
  • chore: ⬆️ Update ggml-org/llama.cpp to 5bbe6a9fe9a8796a9389c85accec89dbc4d91e39 by @localai-bot in #4943
  • chore(deps): update llama.cpp and sync with upstream changes by @mudler in #4950
  • chore: ⬆️ Update ggml-org/llama.cpp to 3d652bfddfba09022525067e672c3c145c074649 by @localai-bot in #4954
  • chore: ⬆️ Update ggml-org/llama.cpp to 7ab364390f92b0b8d83f69821a536b424838f3f8 by @localai-bot in #4959
  • chore: ⬆️ Update ggml-org/llama.cpp to 0fd7ca7a210bd4abc995cd728491043491dbdef7 by @localai-bot in #4963
  • chore: ⬆️ Update ggml-org/llama.cpp to 1e2f78a00450593e2dfa458796fcdd9987300dfc by @localai-bot in #4966
  • chore(deps): Bump intel-extension-for-pytorch from 2.3.110+xpu to 2.6.10+xpu in /backend/python/diffusers by @dependabot in #4973
  • chore(deps): Bump appleboy/ssh-action from 1.2.1 to 1.2.2 by @dependabot in #4978
  • chore(deps): Bump docs/themes/hugo-theme-relearn from 4a4b60e to 9a020e7 by @dependabot in #4988
  • chore: ⬆️ Update ggml-org/llama.cpp to 2c9f833d17bb5b8ea89dec663b072b5420fc5438 by @localai-bot in #4991
  • chore: ⬆️ Update ggml-org/llama.cpp to 10f2e81809bbb69ecfe64fc8b4686285f84b0c07 by @localai-bot in #4996
  • chore: ⬆️ Update ggml-org/llama.cpp to 80a02aa8588ef167d616f76f1781b104c245ace0 by @localai-bot in #5004
  • chore: ⬆️ Update ggml-org/llama.cpp to f08f4b3187b691bb08a8884ed39ebaa94e956707 by @localai-bot in #5006
  • chore: ⬆️ Update ggml-org/llama.cpp to 84d547554123a62e9ac77107cb20e4f6cc503af4 by @localai-bot in #5011
  • chore: ⬆️ Update ggml-org/llama.cpp to 9f2250ba722738ec0e6ab684636268a79160c854 by @localai-bot in #5019
  • chore: ⬆️ Update ggml-org/llama.cpp to f4c3dd5daa3a79f713813cf1aabdc5886071061d by @localai-bot in #5022
  • chore: ⬆️ Update ggml-org/llama.cpp to 8ba95dca2065c0073698afdfcda4c8a8f08bf0d9 by @localai-bot in #5026
  • chore: ⬆️ Update ggml-org/llama.cpp to b1b132efcba216c873715c483809730bb253f4a1 by @localai-bot in #5029
  • chore: ⬆️ Update ggml-org/llama.cpp to d84635b1b085d54d6a21924e6171688d6e3dfb46 by @localai-bot in #5035
  • chore: ⬆️ Update ggml-org/llama.cpp to 568013d0cd3d5add37c376b3d5e959809b711fc7 by @localai-bot in #5042
  • chore: ⬆️ Update ggml-org/llama.cpp to e04643063b3d240b8c0fdba98677dff6ba346784 by @localai-bot in #5047
  • chore: ⬆️ Update ggml-org/llama.cpp to 4375415b4abf94fb36a5fd15f233ac0ee23c0bd1 by @localai-bot in #5052
  • chore: ⬆️ Update ggml-org/llama.cpp to ba932dfb50cc694645b1a148c72f8c06ee080b17 by @localai-bot in #5053
  • chore: ⬆️ Update ggml-org/llama.cpp to 77f9c6bbe55fccd9ea567794024cb80943947901 by @localai-bot in #5062
  • chore: ⬆️ Update ggml-org/llama.cpp to c95fa362b3587d1822558f7e28414521075f254f by @localai-bot in #5068
  • chore: ⬆️ Update ggml-org/llama.cpp to ef19c71769681a0b3dde6bc90911728376e5d236 by @localai-bot in #5073
  • chore: ⬆️ Update ggml-org/llama.cpp to b3298fa47a2d56ae892127ea038942ab1cada190 by @localai-bot in #5077
  • chore: ⬆️ Update ggml-org/llama.cpp to 5dec47dcd411fdf815a3708fd6194e2b13d19006 by @localai-bot in #5079
  • chore: ⬆️ Update ggml-org/llama.cpp to b4ae50810e4304d052e630784c14bde7e79e4132 by @localai-bot in #5085
  • chore: ⬆️ Update ggml-org/llama.cpp to 0bb2919335d00ff0bc79d5015da95c422de51f03 by @localai-bot in #5095
  • chore: ⬆️ Update ggml-org/llama.cpp to 4663bd353c61c1136cd8a97b9908755e4ab30cec by @localai-bot in #5100

Other Changes

  • docs: ⬆️ update docs version mudler/LocalAI by @localai-bot in #4834
  • feat: improve ui models list in the index by @mudler in #4863
  • fix(ui): not all models comes from gallery by @mudler in #4915
  • Revert "chore(deps): Bump intel-extension-for-pytorch from 2.3.110+xpu to 2.6.10+xpu in /backend/python/diffusers" by @mudler in #4992
  • chore(deps): Bump grpcio to 1.71.0 by @mudler in #4993
  • fix: ensure git-lfs is present by @dave-gray101 in #5078

Full Changelog: v2.26.0...v2.27.0
