## What's Changed

### TabbyAPI backend support

```bash
harbor tabbyapi model Annuvin/gemma-2-2b-it-abliterated-4.0bpw-exl2
harbor up tabbyapi
```

## New CLI Features

### `harbor hf dl`

Integrates the awesome HuggingFaceModelDownloader CLI for easier management of the HF and llama.cpp caches.
```bash
# See the original help
harbor hf dl --help

# EXL2 example
#
# -s ./hf - save the model to the global HuggingFace cache (mounted to ./hf)
# -c 10   - make the download go brr with 10 concurrent connections
# -m      - model specifier in user/repo format
# -b      - model revision/branch specifier (where applicable)
harbor hf dl -c 10 -m turboderp/TinyLlama-1B-exl2 -b 2.3bpw -s ./hf

# GGUF example
#
# -s ./llama.cpp - save the model to the global llama.cpp cache (mounted to ./llama.cpp)
# -c 10          - make the download go brr with 10 concurrent connections
# -m             - model specifier in user/repo format
# :Q2_K          - file-filter postfix - only files matching this postfix are downloaded
harbor hf dl -c 10 -m TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF:Q2_K -s ./llama.cpp
```

### `harbor hf find`
A companion to `hf dl` - a quick way to jump straight to the HF Hub to find new models.
```bash
harbor hf find gguf
harbor hf find exl2 gemma-2
harbor hf find awq llama-3.1
harbor hf find tinyllama
```

## Misc
- docs: update README.md by @eltociear in #3
- `harbor shell` - launch an interactive shell in a service container (a shortcut for the previous `harbor exec` + `harbor cmd` combinations)
- `harbor build` - for services whose `Dockerfile` lives within the Harbor repo (such as `hfdownloader`)
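A quick sketch of the new shortcuts. This assumes `harbor shell` and `harbor build` take a service handle as their argument (the `tabbyapi` service from above is used purely as an example target):

```bash
# Before: exec into a running service container, naming the shell explicitly
# (assumed invocation shape for harbor exec)
harbor exec tabbyapi bash

# Now: one shortcut that drops straight into an interactive shell
harbor shell tabbyapi

# Rebuild a service whose Dockerfile lives in the Harbor repo
harbor build hfdownloader
```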
## New Contributors
- @eltociear made their first contribution in #3
**Full Changelog**: v0.0.12...v0.0.13