What's Changed
- Match the non-greedy SentencePiece tokenization used by llama.cpp.
- Move from PyTorch to NumPy and SciPy for basic scoring needs (see the sketch after this list).
- Update llama.cpp to use its newest API by @paulbkoch in #455
- Use `len(tkz)` instead of `tkz.vocab_size` to estimate vocabulary size by @EgorBu in #460 (see the example after this list).
- Fix test warnings by @riedgar-ms in #461
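A minimal sketch of what "basic scoring with NumPy/SciPy instead of PyTorch" can look like: computing per-token log-probabilities from raw logits. The array names here are hypothetical placeholders, not identifiers from the library.

```python
# Sketch: per-token log-probabilities from logits using NumPy/SciPy only.
import numpy as np
from scipy.special import log_softmax

logits = np.random.randn(5, 32000)         # (sequence_length, vocab_size), placeholder data
token_ids = np.array([17, 923, 4, 88, 2])  # tokens actually generated, placeholder data

log_probs = log_softmax(logits, axis=-1)                        # normalize each step
token_scores = log_probs[np.arange(len(token_ids)), token_ids]  # score of each chosen token
total_score = token_scores.sum()                                # sequence log-probability
print(token_scores, total_score)
```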
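Why `len(tkz)` is the safer estimate: with Hugging Face tokenizers (assumed here for illustration), `vocab_size` reports only the base vocabulary and ignores added tokens, while `len(tokenizer)` counts them. The model name below is just an example.

```python
# Illustration: vocab_size misses added tokens, len(tokenizer) does not.
from transformers import AutoTokenizer

tkz = AutoTokenizer.from_pretrained("gpt2")
tkz.add_tokens(["<my_new_token>"])  # extend the vocabulary with one new token

print(tkz.vocab_size)  # 50257 -- unchanged, added tokens not counted
print(len(tkz))        # 50258 -- includes the added token
```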
New Contributors
- @EgorBu made their first contribution in #460
- @riedgar-ms made their first contribution in #461
Full Changelog: 0.1.2...0.1.3