Name change: welcome 🤗 Transformers
Following the extension to TensorFlow 2.0: `pytorch-transformers` => `transformers`.
Install with `pip install transformers`.
Also note that PyTorch is no longer in the requirements, so don't forget to install TensorFlow 2.0 and/or PyTorch to be able to use (and load) the models.
TensorFlow 2.0 - PyTorch
All the PyTorch `nn.Module` classes now have their counterpart in TensorFlow 2.0 as `tf.keras.Model` classes. TensorFlow 2.0 classes have the same names as their PyTorch counterparts, prefixed with `TF`.
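The naming convention can be illustrated with a small sketch (the class names below are the library's real ones, but the helper function is hypothetical and not part of the library):

```python
def tf_counterpart(pytorch_class_name):
    """Illustrative helper (not in the library): derive the name of the
    TensorFlow 2.0 twin of a PyTorch model class by prefixing 'TF'."""
    return "TF" + pytorch_class_name

# e.g. the PyTorch BertModel has a TensorFlow 2.0 counterpart named TFBertModel
print(tf_counterpart("BertModel"))  # TFBertModel
print(tf_counterpart("GPT2Model"))  # TFGPT2Model
```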
The interoperability between TensorFlow and PyTorch is actually a lot deeper than what is usually meant when talking about libraries with multiple backends:
- each model (not just the static computation graph) can be seamlessly moved from one framework to the other during the lifetime of the model for training/evaluation/usage (`from_pretrained` can load weights saved from models saved in one or the other framework),
- an example is given in the quick tour on TF 2.0 and PyTorch in the README, in which a model is trained using `keras.fit` before being opened in PyTorch for quick debugging/inspection.
Remaining unsupported operations in TF 2.0 (to be added later):
- resizing input embeddings to add new tokens
- pruning model heads
TPU support
Training on TPU using the free TPUs provided in the TensorFlow Research Cloud (TFRC) program is possible, but requires implementing a custom training loop (not possible with `keras.fit` at the moment).
We will add an example of such a custom training loop soon.
Improved tokenizers
Tokenizers have been improved to provide an extended encoding method, `encode_plus`, and additional arguments to `encode`. Please refer to the docs for detailed usage of the new options.
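The shape of the extended return value can be sketched with a toy example. The tokenizer below is a stand-in, not the library's; only the output structure (a dict bundling `input_ids` and `token_type_ids` for a possibly paired input) mirrors what an `encode_plus`-style method returns:

```python
# Toy sketch: unlike a plain encode() that returns a flat list of ids,
# an encode_plus-style method returns a dict with everything the model needs.
VOCAB = {"[CLS]": 0, "[SEP]": 1}

def _ids(text):
    """Hypothetical word-level lookup, growing the toy vocabulary on the fly."""
    return [VOCAB.setdefault(tok, len(VOCAB)) for tok in text.split()]

def toy_encode_plus(text, text_pair=None):
    ids = [VOCAB["[CLS]"]] + _ids(text) + [VOCAB["[SEP]"]]
    token_type_ids = [0] * len(ids)
    if text_pair is not None:
        pair_ids = _ids(text_pair) + [VOCAB["[SEP]"]]
        ids += pair_ids
        token_type_ids += [1] * len(pair_ids)
    return {"input_ids": ids, "token_type_ids": token_type_ids}

enc = toy_encode_plus("hello world", "goodbye world")
print(enc["token_type_ids"])  # [0, 0, 0, 0, 1, 1, 1]
```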
Breaking changes
Positional order of some model keywords inputs changed (better TorchScript support)
To be able to better use TorchScript on both CPUs and GPUs (see #1010, #1204 and #1195), the positional order of some models' keyword inputs (`attention_mask`, `token_type_ids`...) has been changed.
If you used to call the models with keyword names for keyword arguments, e.g. `model(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)`, this should not cause any breaking change.
If you used to call the models with positional inputs for keyword arguments, e.g. `model(input_ids, attention_mask, token_type_ids)`, you should double-check the exact order of input arguments.
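The pitfall can be shown with a plain-Python sketch. The signatures below are hypothetical simplifications, not the real model signatures, but they capture the hazard of calling reordered keyword parameters positionally:

```python
# Sketch of why positional calls break when the keyword-argument order changes.

def model_v1(input_ids, token_type_ids=None, attention_mask=None):
    """Old (hypothetical) argument order."""
    return {"token_type_ids": token_type_ids, "attention_mask": attention_mask}

def model_v2(input_ids, attention_mask=None, token_type_ids=None):
    """New (hypothetical) argument order: attention_mask moved forward."""
    return {"token_type_ids": token_type_ids, "attention_mask": attention_mask}

mask, types = [1, 1, 1], [0, 0, 1]

# Keyword calls bind by name, so they survive the reordering unchanged:
assert model_v1([1, 2, 3], attention_mask=mask, token_type_ids=types) == \
       model_v2([1, 2, 3], attention_mask=mask, token_type_ids=types)

# Positional calls bind by position: the same call now silently swaps the two.
old = model_v1([1, 2, 3], types, mask)
new = model_v2([1, 2, 3], types, mask)
assert old["attention_mask"] == mask
assert new["attention_mask"] == types  # the wrong tensor ends up as the mask!
```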
Dependency requirements have changed
PyTorch is no longer in the requirements, so don't forget to install TensorFlow 2.0 and/or PyTorch to be able to use (and load) the models.
Renamed method
The method `add_special_tokens_sentence_pair` has been renamed to the more appropriate name `add_special_tokens_sequence_pair`.
The same holds true for the method `add_special_tokens_single_sentence`, which has been changed to `add_special_tokens_single_sequence`.
Community additions/bug-fixes/improvements
- new German model (@Timoeller)
- new script for MultipleChoice training (SWAG, RocStories...) (@erenup)
- better fp16 support (@ziliwang and @bryant1410)
- fix evaluation in `run_lm_finetuning` (@SKRohit)
- fix LM fine-tuning to prevent crashing on `assert len(tokens_b) >= 1` (@searchivarius)
- Various doc and docstring fixes (@sshleifer, @Maxpa1n, @mattolson93, @T080)