TabbyML/tabby v0.5.0-rc.0

Pre-release · 22 months ago

⚠️ Notice

  • The llama.cpp backend (CPU, Metal) now requires re-downloading GGUF models due to upstream format changes: #645 ggml-org/llama.cpp#3252
  • Due to indexing format changes, ~/.tabby/index must be removed manually before running tabby scheduler again.
  • TABBY_REGISTRY is replaced with TABBY_DOWNLOAD_HOST for the GitHub-based registry implementation.
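The migration steps above can be sketched as a short shell session. This is a hedged example, not an official upgrade script: the `example.com` host is a placeholder (not a real registry), and it runs against a scratch `HOME` so it is safe to dry-run; drop that line to apply it to a real installation.

```shell
# Dry-run against a throwaway HOME; remove this line for a real upgrade.
export HOME="$(mktemp -d)"
mkdir -p "$HOME/.tabby/index"   # simulate an index left over from v0.4.x

# 1. The old index format is incompatible: remove it before re-running `tabby scheduler`.
rm -rf "$HOME/.tabby/index"

# 2. TABBY_REGISTRY is gone; point TABBY_DOWNLOAD_HOST at your download mirror instead.
#    ("example.com" is a placeholder host, not a real registry.)
unset TABBY_REGISTRY
export TABBY_DOWNLOAD_HOST=example.com
```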

🚀 Features

  • Improved dashboard UI.

🧰 Fixes and Improvements

  • CPU backend is switched to llama.cpp: #638
  • Added server.completion_timeout to control the code completion request timeout: #637
  • CUDA backend is switched to llama.cpp: #656
  • Tokenizer implementation is switched to llama.cpp, so Tabby no longer needs to download a separate tokenizer file: #683
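For the new server.completion_timeout knob, a minimal sketch of how it might be set in ~/.tabby/config.toml — the key name comes from the notes above, but the section placement, value, and seconds unit are assumptions, so check the Tabby configuration docs for your version. The snippet writes to a scratch file so it is safe to run as-is; target ~/.tabby/config.toml for real use.

```shell
# Write to a scratch config so this sketch is safe to run; use
# ~/.tabby/config.toml on a real installation.
CONFIG="$(mktemp -d)/config.toml"
cat > "$CONFIG" <<'EOF'
[server]
completion_timeout = 30  # code completion request timeout (assumed to be seconds)
EOF
grep completion_timeout "$CONFIG"
```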

💫 New Contributors

Full Changelog: v0.4.0...v0.5.0
