Changes
- Nicer one-liner for the install
- `cmdh`
  - built-in `override.env`
  - Better auto-config based on the running LLM backend (llamacpp added)
  - `harbor cmdh url/key` - configure OpenAI api key/url via CLI
- Fixed "service has no docker image..." error when running with multiple LLM backends
- `hf`
  - built-in
  - Using custom docker image for a more recent CLI version
  - Integrated help between Harbor extensions and native CLI
- `llamacpp`
  - Configuring model via a path to `.gguf` in the cache
  - Configure cache location: `harbor llamacpp cache ~/path/to/cache`
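Taken together, the `llamacpp` options above might be used like so. Only the `cache` command is taken verbatim from the release; the `model` subcommand and the `.gguf` filename are assumptions for the sake of the example:

```shell
# Tell Harbor where the llama.cpp cache lives (path is illustrative)
harbor llamacpp cache ~/path/to/cache

# Select a model by its .gguf path inside that cache
# (the `model` subcommand and filename are assumptions, not from the release)
harbor llamacpp model /models/example-model.gguf
```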
- `harbor cmd` - `--h` flag prints compose files in a nicer way for debug
- `harbor find` - looks up files in caches connected with Harbor (HF, vLLM, llama.cpp, ollama): `harbor find *.gguf`, `harbor find Hermes`
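The two `harbor find` invocations mentioned above, spelled out; quoting the glob keeps the shell from expanding it before Harbor sees it:

```shell
# Search connected caches (HF, vLLM, llama.cpp, ollama) for GGUF files
harbor find "*.gguf"

# Look up cached files by a name fragment, e.g. a model family
harbor find Hermes
```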
- `litellm` - now uses the OpenAI-compatible API for `tgi`, as the native API only works with non-chat completions
**Full Changelog**: v0.1.0...v0.1.1