3.0.0-beta.14 (2024-03-16)
Bug Fixes
- `DisposedError` was thrown when calling `.dispose()` (#178) (315a3eb)
- adapt to breaking `llama.cpp` changes (#178) (315a3eb)
Features
- async model and context loading (#178) (315a3eb)
- automatically try to resolve the `Failed to detect a default CUDA architecture` CUDA compilation error (#178) (315a3eb)
- detect `cmake` binary issues and suggest fixes on detection (#178) (315a3eb)
Shipped with llama.cpp release `b2440`
To use the latest `llama.cpp` release available, run `npx --no node-llama-cpp download --release latest`. (learn more)
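For example, updating an existing project to the newest `llama.cpp` release could look like this (a sketch; the `--release` value `b2440` is the build this beta shipped with, shown here only to illustrate pinning a specific release):

```shell
# Download and build the latest available llama.cpp release
# (--no tells npx to use the locally installed node-llama-cpp, not fetch a new one)
npx --no node-llama-cpp download --release latest

# Or pin the exact release this version was shipped with
npx --no node-llama-cpp download --release b2440
```

Pinning a known-good release can be useful when a newer `llama.cpp` build introduces breaking changes before the bindings adapt to them.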