3.3.0 (2024-12-02)
Bug Fixes
- improve binary compatibility testing on Electron apps (#386) (97abbca)
- too many abort signal listeners (#386) (97abbca)
- log level of some lower level logs (#386) (97abbca)
- context window missing response during generation on specific extreme conditions (#386) (97abbca)
- adapt to breaking `llama.cpp` changes (#386) (97abbca)
- automatically resolve `compiler is out of heap space` CUDA build error (#386) (97abbca)
Features
- Llama 3.2 3B function calling support (#386) (97abbca)
- use `llama.cpp` backend registry for GPUs instead of custom implementations (#386) (97abbca)
- `getLlama`: `build: "try"` option (#386) (97abbca)
- `init` command: `--model` flag (#386) (97abbca)
- JSON Schema grammar: array `prefixItems`, `minItems`, `maxItems` support (#388) (4d387de)
- JSON Schema grammar: object `additionalProperties`, `minProperties`, `maxProperties` support (#388) (4d387de)
- JSON Schema grammar: string `minLength`, `maxLength`, `format` support (#388) (4d387de)
- JSON Schema grammar: improve inferred types (#388) (4d387de)
- function calling: params `description` support (#388) (4d387de)
- function calling: document JSON Schema type properties on Functionary chat function types (#388) (4d387de)
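The new JSON Schema grammar keywords can be illustrated with a schema that exercises each group of constraints. A minimal sketch; the schema object itself is hypothetical example data, and passing it to the library's JSON-schema grammar facilities (model loading and setup omitted) is assumed to constrain generated output accordingly:

```typescript
// A schema using the array, object, and string constraints newly
// supported by the JSON Schema grammar in this release. Feeding it to
// node-llama-cpp (e.g. as a grammar or a chat function's params schema)
// is assumed to constrain model output to match it.
const userSchema = {
    type: "object",
    properties: {
        // array constraints: prefixItems, minItems, maxItems
        tags: {
            type: "array",
            prefixItems: [{type: "string"}],
            minItems: 1,
            maxItems: 5
        },
        // string constraints: minLength, maxLength, format
        name: {type: "string", minLength: 1, maxLength: 32},
        createdAt: {type: "string", format: "date-time"}
    },
    // object constraints: additionalProperties, minProperties, maxProperties
    additionalProperties: false,
    minProperties: 1,
    maxProperties: 3
} as const;
```

Per the release notes, types inferred from such schemas are also improved, so the constrained fields flow through to the generated output's TypeScript type.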
Shipped with `llama.cpp` release `b4234`
To use the latest `llama.cpp` release available, run `npx -n node-llama-cpp source download --release latest`. (learn more)