github ggml-org/llama.cpp b8498


common : add standard Hugging Face cache support (#20775)

  • common : add standard Hugging Face cache support
  • Use HF API to find all files
  • Migrate all manifests to the Hugging Face cache at startup
  • Check with the quant tag
  • Cleanup
  • Improve error handling and report API errors
  • Restore common_cached_model_info and align mmproj filtering
  • Prefer main when getting the cached ref
  • Use cached files when HF API fails
  • Use final_path..
  • Check all inputs

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
