3.0.0-beta.16 (2024-04-13)
Bug Fixes
- inspect gpu command: print device names (#198) (5ca33c7)
- inspect gpu command: print env info (#202) (d332b77)

Features
- download models using the CLI (#191) (b542b53)
- interactively select a model from CLI commands (#191) (b542b53)
- change the default log level to warn (#191) (b542b53)
- token biases (#196) (3ad4494)
Shipped with llama.cpp release b2665
To use the latest llama.cpp release available, run `npx --no node-llama-cpp download --release latest`. (learn more)