allenai/allennlp v2.6.0

What's new

Added 🎉

  • Added an on_backward training callback, which allows control over backpropagation and gradient manipulation (see the callback sketch after this list).
  • Added AdversarialBiasMitigator, a Model wrapper to adversarially mitigate biases in predictions produced by a pretrained model for a downstream task.
  • Added which_loss parameter to ensure_model_can_train_save_and_load in ModelTestCase to specify which loss to test.
  • Added **kwargs to Predictor.from_path(). These keyword arguments will be passed on to the Predictor's constructor.
  • The activation layer in the transformer toolkit can now be queried for its output dimension.
  • TransformerEmbeddings now takes, but ignores, a parameter for the attention mask. This is needed for compatibility with some other modules that get called the same way and use the mask.
  • TransformerPooler can now be instantiated from a pretrained transformer module, just like the other modules in the transformer toolkit.
  • Added TransformerTextField, for cases where you don't need AllenNLP's advanced text handling capabilities (see the sketch after this list).
  • Added TransformerModule._post_load_pretrained_state_dict_hook() method. Can be used to modify missing_keys and unexpected_keys after loading a pretrained state dictionary. This is useful when tying weights, for example.
  • Added an end-to-end test for the Transformer Toolkit.
  • Added a vocab argument to BeamSearch, which is passed to each constraint in constraints (if provided).
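
A minimal sketch of the new on_backward hook, assuming the callback receives the trainer, the batch outputs, and a backward_called flag, and returns whether backpropagation has already been run; the exact signature may differ, and the callback name and clipping threshold below are illustrative only.

```python
import torch

from allennlp.training.callbacks import TrainerCallback


@TrainerCallback.register("clip-on-backward")  # hypothetical name, for illustration
class ClipOnBackwardCallback(TrainerCallback):
    def on_backward(self, trainer, batch_outputs, backward_called: bool, **kwargs) -> bool:
        # Assumed contract: return True if this callback ran backpropagation itself,
        # so the trainer skips its default loss.backward() call.
        if backward_called:
            return False
        batch_outputs["loss"].backward()
        torch.nn.utils.clip_grad_norm_(trainer.model.parameters(), max_norm=1.0)
        return True
```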

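Similarly, a sketch of how TransformerTextField might be used: tokenize with a Hugging Face tokenizer and hand the resulting tensors straight to the field, skipping AllenNLP's token indexers and vocabulary. That the field accepts the tokenizer's output names (input_ids, attention_mask, token_type_ids) directly is an assumption here.

```python
from transformers import AutoTokenizer

from allennlp.data import Instance
from allennlp.data.fields import TransformerTextField

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer("AllenNLP is a library for NLP research.", return_tensors="pt")

# Assumed constructor: the field wraps the tokenizer's tensors for one example.
field = TransformerTextField(**{name: tensor[0] for name, tensor in encoded.items()})
instance = Instance({"text": field})
```
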
Fixed ✅

  • Fixed missing device mapping in the allennlp.modules.conditional_random_field.py file.
  • Fixed a broken link in the allennlp.fairness.fairness_metrics.Separation docs.
  • Ensured all allennlp submodules are imported with allennlp.common.plugins.import_plugins().
  • Fixed IndexOutOfBoundsException in MultiOptimizer when checking if optimizer received any parameters.
  • Removed confusing zero mask from VilBERT.
  • Ensured ensure_model_can_train_save_and_load is consistently random.
  • Fixed weight tying logic in the T5 transformer module. Previously, input/output embeddings were always tied; now this is optional, and the default behavior is taken from the config.tie_word_embeddings value when instantiating with from_pretrained_module() (see the sketch after this list).
  • Implemented slightly faster label smoothing.
  • Fixed the docs for PytorchTransformerWrapper.
  • Fixed recovering training jobs with models that expect get_metrics() to not be called until they have seen at least one batch.
  • Made the Transformer Toolkit compatible with transformers that don't start their positional embeddings at 0.
  • Weights & Biases training callback ("wandb") now works when resuming training jobs.

Changed ⚠️

  • Changed behavior of MultiOptimizer so that while a default optimizer is still required, an error is not thrown if the default optimizer receives no parameters.
  • Made the epsilon parameter for the layer normalization in token embeddings configurable.

Removed 👋

  • Removed TransformerModule._tied_weights. Weights should now just be tied directly in the __init__() method. You can also override TransformerModule._post_load_pretrained_state_dict_hook() to remove keys associated with tied weights from missing_keys after loading a pretrained state dictionary (see the sketch below).
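
For illustration, a sketch of the replacement pattern: tie the weights directly in __init__() and override the hook so that loading a pretrained state dict does not report the tied parameter as missing. The hook's exact signature is an assumption (here it mutates missing_keys in place), and TiedLMHead and its attributes are hypothetical.

```python
from typing import List

import torch

from allennlp.modules.transformer import TransformerModule


class TiedLMHead(TransformerModule):  # hypothetical module, for illustration only
    def __init__(self, embeddings: torch.nn.Embedding):
        super().__init__()
        vocab_size, hidden_size = embeddings.weight.shape
        self.decoder = torch.nn.Linear(hidden_size, vocab_size, bias=False)
        self.decoder.weight = embeddings.weight  # tie directly in __init__()

    def _post_load_pretrained_state_dict_hook(
        self, missing_keys: List[str], unexpected_keys: List[str]
    ) -> None:
        # The tied weight never appears separately in the checkpoint, so drop it
        # from missing_keys instead of letting it be reported as missing.
        if "decoder.weight" in missing_keys:
            missing_keys.remove("decoder.weight")
```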

Commits

ef5400d make W&B callback resumable (#5312)
9629340 Update google-cloud-storage requirement (#5309)
f8fad9f Provide vocab as param to constraints (#5321)
56e1f49 Fix training Conditional Random Fields on GPU (#5313) (#5315)
3c1ac03 Update wandb requirement from <0.11.0,>=0.10.0 to >=0.10.0,<0.12.0 (#5316)
7d4a672 Transformer Toolkit fixes (#5303)
aaa816f Faster label smoothing (#5294)
436c52d Docs update for PytorchTransformerWrapper (#5295)
3d92ac4 Update google-cloud-storage requirement (#5296)
5378533 Fixes recovering when the model expects metrics to be ready (#5293)
7428155 ensure torch always up-to-date in CI (#5286)
3f307ee Update README.md (#5288)
672485f only run CHANGELOG check when source files are modified (#5287)
c6865d7 use smaller snapshot for HFHub integration test
ad54d48 Bump mypy from 0.812 to 0.910 (#5283)
42d96df typo: missing "if" in drop_last doc (#5284)
a246e27 TransformerTextField (#5280)
82053a9 Improve weight tying logic in transformer module (#5282)
c936da9 Update transformers requirement from <4.8,>=4.1 to >=4.1,<4.9 (#5281)
e8f816d Update google-cloud-storage requirement (#5277)
86504e6 Making model test case consistently random (#5278)
5a7844b add kwargs to Predictor.from_path() (#5275)
8ad562e Update transformers requirement from <4.7,>=4.1 to >=4.1,<4.8 (#5273)
c8b8ed3 Transformer toolkit updates (#5270)
6af9069 update Python environment setup in GitHub Actions (#5272)
f1f51fc Adversarial bias mitigation (#5269)
af101d6 Removes confusing zero mask from VilBERT (#5264)
a1d36e6 Update torchvision requirement from <0.10.0,>=0.8.1 to >=0.8.1,<0.11.0 (#5266)
e5468d9 Bump black from 21.5b2 to 21.6b0 (#5255)
b37686f Update torch requirement from <1.9.0,>=1.6.0 to >=1.6.0,<1.10.0 (#5267)
5da5b5b Upload code coverage reports from different jobs, other CI improvements (#5257)
a6cfb12 added on_backward trainer callback (#5249)
8db45e8 Ensure all relevant allennlp submodules are imported with import_plugins() (#5246)
57df0e3 [Docs] Fixes broken link in Fairness_Metrics (#5245)
154f75d Bump black from 21.5b1 to 21.5b2 (#5236)
7a5106d tick version for nightly release
