mudler/LocalAI v1.40.0

This release is a preparation before v2 - the efforts now will be to refactor, polish and add new backends. Follow up on: #1126

Hot topics

This release brings the new llama-cpp backend, a C++ backend tied to llama.cpp that follows upstream more closely and tracks its recent versions. It is not yet feature-compatible with the current llama backend, but the plan is to sunset the current llama backend in favor of this one. This will probably be the last release containing the older llama backend written in Go and C++. The major improvement with this change is that there are fewer layers that could expose potential bugs, and it also makes maintenance easier.
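
From a client's point of view nothing changes with the new backend: requests still go through the same OpenAI-compatible HTTP API. Below is a minimal sketch of a chat completion against a local instance, using the github.com/sashabaranov/go-openai client already pinned in the dependency list below; the base URL, port and model name are assumptions and should be adapted to your setup.

```go
package main

import (
	"context"
	"fmt"
	"log"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	// Point the OpenAI client at a local LocalAI instance instead of api.openai.com.
	// The API key is unused by default; ":8080" assumes the default LocalAI port.
	cfg := openai.DefaultConfig("not-needed")
	cfg.BaseURL = "http://localhost:8080/v1"
	client := openai.NewClientWithConfig(cfg)

	resp, err := client.CreateChatCompletion(context.Background(), openai.ChatCompletionRequest{
		// The model name is an assumption: use whatever model is configured locally.
		Model: "gpt-3.5-turbo",
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleUser, Content: "Hello from LocalAI!"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Since #1164 the ID and Created fields are populated in OpenAI-compatible responses.
	fmt.Println(resp.ID, resp.Created, resp.Choices[0].Message.Content)
}
```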

Support for ROCm/HIPBLAS

This release brings support for AMD GPUs via ROCm/HIPBLAS, thanks to @65a. See more details in #1100

More CLI commands

Thanks to @jespino, the local-ai binary now has more subcommands, allowing you to manage the model gallery or run inference directly from the command line. Check it out!
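
If you prefer to script gallery operations instead of using the CLI, they can also be reached over LocalAI's HTTP API. Below is a minimal sketch in Go; the /models/apply endpoint, the default port and the gallery id used here are assumptions to adapt to your installation.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Request body asking LocalAI to install a model from the gallery.
	// The gallery/model id below is a placeholder; replace it with a real entry.
	body, err := json.Marshal(map[string]string{
		"id": "model-gallery@bert-embeddings",
	})
	if err != nil {
		log.Fatal(err)
	}

	// ":8080" assumes the default LocalAI port.
	resp, err := http.Post("http://localhost:8080/models/apply",
		"application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Print whatever the server returns (typically a job descriptor to poll for progress).
	var out map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```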

What's Changed

Bug fixes 🐛

  • fix(openai): Populate ID and Created fields in OpenAI compatible responses by @jespino in #1164
  • Fix backend/cpp/llama CMakeList.txt on OSX by @dave-gray101 in #1212

Exciting New Features 🎉

👒 Dependencies

  • fix(deps): update module github.com/onsi/gomega to v1.28.0 by @renovate in #1113
  • ⬆️ Update go-skynet/go-llama.cpp by @localai-bot in #1106
  • fix(deps): update github.com/tmc/langchaingo digest to e16b777 by @renovate in #1101
  • fix(deps): update github.com/go-skynet/go-llama.cpp digest to 79f9587 by @renovate in #1085
  • fix(deps): update module github.com/shirou/gopsutil/v3 to v3.23.9 by @renovate in #1120
  • fix(deps): update module github.com/sashabaranov/go-openai to v1.15.4 by @renovate in #1122
  • fix(deps): update module github.com/rs/zerolog to v1.31.0 by @renovate in #1102
  • ⬆️ Update go-skynet/go-llama.cpp by @localai-bot in #1130
  • fix(deps): update github.com/go-skynet/go-llama.cpp digest to 6018c9d by @renovate in #1129
  • ⬆️ Update go-skynet/go-llama.cpp by @localai-bot in #1136
  • fix(deps): update github.com/go-skynet/go-llama.cpp digest to 1676dcd by @renovate in #1135
  • fix(deps): update github.com/nomic-ai/gpt4all/gpt4all-bindings/golang digest to 56c0d28 by @renovate in #1140
  • fix(deps): update module github.com/onsi/ginkgo/v2 to v2.13.0 by @renovate in #1152
  • fix(deps): update module google.golang.org/grpc to v1.58.3 by @renovate in #1160
  • fix(deps): update github.com/go-skynet/go-llama.cpp digest to aeba71e by @renovate in #1155
  • fix(deps): update github.com/nomic-ai/gpt4all/gpt4all-bindings/golang digest to 10f9b49 by @renovate in #1158
  • fix(deps): update module github.com/sashabaranov/go-openai to v1.16.0 by @renovate in #1159
  • fix(deps): update github.com/nomic-ai/gpt4all/gpt4all-bindings/golang digest to 22de3c5 by @renovate in #1172
  • fix(deps): update github.com/tmc/langchaingo digest to a02d4fd by @renovate in #1175
  • fix(deps): update module github.com/gofiber/fiber/v2 to v2.50.0 by @renovate in #1177
  • fix(deps): update github.com/nomic-ai/gpt4all/gpt4all-bindings/golang digest to 9a19c74 by @renovate in #1179
  • fix(deps): update github.com/tmc/langchaingo digest to c636b3d by @renovate in #1188
  • fix(deps): update module google.golang.org/grpc to v1.59.0 by @renovate in #1189
  • chore(deps): update actions/checkout action to v4 by @renovate in #1006
  • feat(llama.cpp): update by @mudler in #1200
  • ⬆️ Update go-skynet/go-llama.cpp by @localai-bot in #1156
  • fix(deps): update github.com/nomic-ai/gpt4all/gpt4all-bindings/golang digest to c25dc51 by @renovate in #1191
  • ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1204
  • fix(deps): update module github.com/onsi/gomega to v1.28.1 by @renovate in #1205

Other Changes

  • fix(deps): update github.com/nomic-ai/gpt4all/gpt4all-bindings/golang digest to 6711bdd by @renovate in #1079
  • ci: cleanup worker by @mudler in #1166
  • docs(examples): Add mistral example by @mudler in #1214
  • feat(llama.cpp): Bump llama.cpp, adapt grpc server by @mudler in #1211
  • cleanup: drop bloomz and ggllm as now supported by llama.cpp by @mudler in #1217
  • ci: use self-hosted to build container images by @mudler in #1206
  • ci: run only cublas on selfhosted by @mudler in #1224
  • ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1207

New Contributors

Full Changelog: v1.30.0...v1.40.0
