Changed
- Only one progress bar while reading files during training. This is better for use-cases with
  a high number of files, as it avoids having too many progress bars on screen. It also avoids reading
  the size of each file before actually starting to read them, since that step could take a very long time.
- [#193]: `encode` and `encode_batch` now take a new optional argument, specifying whether we
  should add the special tokens. This is activated by default (see the sketches after this list).
- [#197]: `original_str` and `normalized_str` have been removed from the `Encoding` returned by
  `encode` and `encode_batch`. This brings a reduction of 70% of the memory footprint.
- [#197]: The offsets provided on `Encoding` are now relative to the original string, and not the
  normalized one anymore.
- The added tokens given to `add_special_tokens` or `add_tokens` on a `Tokenizer`, or while using
  `train(special_tokens=...)`, can now be instances of `AddedToken` to provide more control over
  these tokens (see the sketches after this list).
- [#136]: Updated Pyo3 version.
- [#136]: Static methods `Model.from_files` and `Model.empty` are removed in favor of using
  constructors.
- [#239]: `CharBPETokenizer` now corresponds to the OpenAI GPT BPE implementation by default.
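
As a quick illustration of [#193], here is a minimal sketch of passing the new argument at encoding time. The `vocab.txt` path is a placeholder, and using `BertWordPieceTokenizer` is just one way to obtain a trained tokenizer:

```python
from tokenizers import BertWordPieceTokenizer

# Placeholder vocabulary file; any trained tokenizer behaves the same way.
tokenizer = BertWordPieceTokenizer("vocab.txt")

# Special tokens ([CLS], [SEP], ...) are added by default.
with_special = tokenizer.encode("Hello, world!")

# They can now be skipped per call instead of being configured on the tokenizer.
without_special = tokenizer.encode("Hello, world!", add_special_tokens=False)
batch = tokenizer.encode_batch(
    ["Hello, world!", "How are you?"], add_special_tokens=False
)

print(with_special.tokens)     # includes [CLS] / [SEP]
print(without_special.tokens)  # plain word pieces only
```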
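
And a sketch of providing `AddedToken` instances instead of plain strings. The `single_word` and `lstrip` options shown are examples of the extra control; the exact set of options may differ between versions:

```python
from tokenizers import Tokenizer, AddedToken
from tokenizers.models import BPE

tokenizer = Tokenizer(BPE())  # empty model, purely for illustration

# Plain strings still work...
tokenizer.add_tokens(["[NEW]"])

# ...but AddedToken instances give finer control over how the token is matched
# (options shown here are illustrative).
tokenizer.add_tokens([AddedToken("[ENT]", single_word=True)])
tokenizer.add_special_tokens([AddedToken("<mask>", lstrip=True)])
```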
Added
- [#188]: `ByteLevel` is also a `PostProcessor` now and handles trimming the offsets if activated.
  This avoids the unintuitive inclusion of the whitespaces in the produced offsets, even if these
  whitespaces are part of the actual token. It has been added to `ByteLevelBPETokenizer` but it is
  off by default (`trim_offsets=False`). (See the sketches after this list.)
- [#236]: `RobertaProcessing` also handles trimming the offsets.
- [#234]: New alignment mappings on the `Encoding`. Provide methods to easily convert between `char`
  or `word` (input space) and `token` (output space). (See the sketches after this list.)
- `post_process` can be called on the `Tokenizer`.
- [#208]: Ability to retrieve the vocabulary from the `Tokenizer` with
  `get_vocab(with_added_tokens: bool)`.
- [#136]: Models can now be instantiated through object constructors.
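
A sketch of [#188], attaching the `ByteLevel` post-processor so the offsets get trimmed. The `vocab.json` and `merges.txt` paths are placeholders, and the overall setup is only one possible configuration:

```python
from tokenizers import Tokenizer, ByteLevelBPETokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import ByteLevel as ByteLevelPreTokenizer
from tokenizers.processors import ByteLevel as ByteLevelPostProcessor

# On a raw Tokenizer (placeholder files), the ByteLevel post-processor can now
# trim the offsets so they no longer include the leading whitespace carried by
# the byte-level tokens.
tokenizer = Tokenizer(BPE("vocab.json", "merges.txt"))
tokenizer.pre_tokenizer = ByteLevelPreTokenizer()
tokenizer.post_processor = ByteLevelPostProcessor(trim_offsets=True)

# The ByteLevelBPETokenizer implementation exposes the same option, off by default.
bbpe = ByteLevelBPETokenizer("vocab.json", "merges.txt", trim_offsets=True)
```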
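
A sketch of the new `Encoding` alignment helpers together with `post_process` and `get_vocab` ([#234], [#208]). The file paths are placeholders, and the helper method names shown (`char_to_token`, `token_to_word`, `word_to_chars`) should be double-checked against the `Encoding` API of your installed version:

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace

# Placeholder files; any trained model works.
tokenizer = Tokenizer(BPE("vocab.json", "merges.txt"))
tokenizer.pre_tokenizer = Whitespace()

encoding = tokenizer.encode("Hello there, world!")

# Alignment helpers on the Encoding (may return None when a position
# is not covered by any token).
token_index = encoding.char_to_token(6)               # token covering character 6
if token_index is not None:
    word_index = encoding.token_to_word(token_index)  # word that token belongs to
    char_span = encoding.word_to_chars(word_index)    # span of that word in the original string

# post_process can now be called directly on the Tokenizer.
processed = tokenizer.post_process(encoding)

# Retrieve the vocabulary, with or without the added tokens.
vocab = tokenizer.get_vocab(with_added_tokens=True)
```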
Fixed
- [#193]: Fix some issues with the offsets being wrong with the `ByteLevel` BPE:
    - when `add_prefix_space=True`
    - [#156]: when a Unicode character gets split up into multiple byte-level characters
- Fix a bug where offsets were wrong when there were any added tokens in the sequence being encoded.
- [#175]: Fix a bug that prevented the addition of more than a certain amount of tokens (even if
  not advised, but that's not the question).
- [#205]: Trim the decoded string in `BPEDecoder` used by `CharBPETokenizer`.
How to migrate
- Add the `ByteLevel` `PostProcessor` to your byte-level BPE tokenizers if relevant. If you are
  using `ByteLevelBPETokenizer`, this option is disabled by default (`trim_offsets=False`).
- The `add_special_tokens` option of `BertWordPieceTokenizer` must now be given to `encode` or
  `encode_batch` (see the sketches after this list).
- Access to the `original_str` on the `Encoding` has been removed. The original string is the input
  of `encode`, so it didn't make sense to keep it there.
- No need to call `original_str.offsets(offsets[N])` to convert offsets to the original string: they
  are now relative to the original string by default.
- Access to the `normalized_str` on the `Encoding` has been removed. It can be retrieved by calling
  `normalize(sequence)` on the `Tokenizer` (see the sketches after this list).
- Change `Model.from_files` and `Model.empty` to use the constructors, which take the same arguments
  as the old methods (i.e. `BPE(vocab, merges)` or `BPE()`). (See the sketches after this list.)
- If you were using the `CharBPETokenizer` and want to keep the same behavior as before, set
  `bert_normalizer=False` and `split_on_whitespace_only=True`.
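
A migration sketch for the `add_special_tokens` and `normalized_str` changes. The vocabulary file and input strings are placeholders, and the `normalize` call is assumed to be exposed by the implementation (otherwise call it on the underlying `Tokenizer`):

```python
from tokenizers import BertWordPieceTokenizer

# Before, add_special_tokens was a constructor option:
#   tokenizer = BertWordPieceTokenizer("vocab.txt", add_special_tokens=False)
# Now it is passed at encoding time (placeholder vocabulary file):
tokenizer = BertWordPieceTokenizer("vocab.txt")
encoding = tokenizer.encode("Héllo, WORLD!", add_special_tokens=False)

# normalized_str is no longer stored on the Encoding; ask the tokenizer when needed.
normalized = tokenizer.normalize("Héllo, WORLD!")
```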
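
A migration sketch for the constructor and `CharBPETokenizer` changes; the file paths are placeholders:

```python
from tokenizers import Tokenizer, CharBPETokenizer
from tokenizers.models import BPE

# Before, models were built through static methods:
#   model = BPE.from_files("vocab.json", "merges.txt")
#   model = BPE.empty()
# Now the constructors take the same arguments (placeholder files):
model = BPE("vocab.json", "merges.txt")
empty_model = BPE()
tokenizer = Tokenizer(model)

# CharBPETokenizer now matches the OpenAI GPT BPE behavior by default;
# the new options restore the previous behavior.
legacy = CharBPETokenizer(
    "vocab.json",
    "merges.txt",
    bert_normalizer=False,
    split_on_whitespace_only=True,
)
```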