[YANKED] Python v0.12.0


[0.12.0]

The breaking change was causing more issues upstream in transformers than anticipated:
huggingface/transformers#16537 (comment)

The decision was to roll back that breaking change and figure out a different way to make this modification later.

The minor version was bumped because of the breaking change.

  • [#938] Breaking change. The Decoder trait was modified to be composable. This is only breaking if you use decoders on their own; full Tokenizer pipelines should remain error-free (a standalone-decoder sketch follows this list).

  • [#939] Made the regex in the ByteLevel pre-tokenizer optional (necessary for BigScience); see the sketch after this list.

  • [#952] Fixed the vocabulary size of the UnigramTrainer output so that it respects added tokens (see the sketch after this list).

  • [#954] Fixed being unable to save vocabularies with holes in the vocab (ConvBert). A warning is now emitted instead of panicking.

  • [#962] Fixed tests for Python 3.10.

  • [#961] Added a link to the Ruby port of tokenizers.
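
For reference, here is a minimal sketch of what "using decoders on their own" (the case affected by #938) looks like in the Python bindings. It assumes the `decoders.ByteLevel` decoder and its `decode` method; exact behavior around this yanked release may differ because of the composability change.

```python
# Sketch: using a decoder on its own, outside a full Tokenizer pipeline
# (the only usage affected by the #938 breaking change).
from tokenizers import decoders

byte_level = decoders.ByteLevel()

# Tokens as a ByteLevel pre-tokenizer would produce them
# ("Ġ" marks a leading space).
tokens = ["Hello", "Ġworld", "!"]

# Standalone decoding back to text.
print(byte_level.decode(tokens))  # expected: "Hello world!"
```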
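
Below is a hedged sketch of the optional regex in the ByteLevel pre-tokenizer (#939), assuming the option is exposed to Python as a `use_regex` flag.

```python
# Sketch: ByteLevel pre-tokenization with and without the GPT-2 split regex.
# The `use_regex` keyword is assumed to be the Python-side name of the option.
from tokenizers import pre_tokenizers

with_regex = pre_tokenizers.ByteLevel(add_prefix_space=False)
print(with_regex.pre_tokenize_str("Hello world!"))    # regex pre-splits the text

without_regex = pre_tokenizers.ByteLevel(add_prefix_space=False, use_regex=False)
print(without_regex.pre_tokenize_str("Hello world!")) # byte-level mapping only, no regex split
```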
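
And a sketch of the UnigramTrainer fix (#952): the point is that the trained vocabulary should now stay within the requested `vocab_size` once added/special tokens are counted. The corpus and numbers below are illustrative only.

```python
# Sketch: training a Unigram model while reserving room for special tokens.
from tokenizers import Tokenizer, models, trainers

tokenizer = Tokenizer(models.Unigram())
trainer = trainers.UnigramTrainer(
    vocab_size=100,                      # total budget, added tokens included
    special_tokens=["<unk>", "<pad>"],
    unk_token="<unk>",
)
tokenizer.train_from_iterator(["a tiny illustrative corpus"] * 100, trainer=trainer)

# With #952, the reported size respects the requested budget.
print(tokenizer.get_vocab_size())  # expected: <= 100
```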
