allenai/allennlp v1.0.0rc6

Fixed

  • A bug where TextFields could not be duplicated since some tokenizers cannot be deep-copied.
    See #4270.
  • Our caching mechanism had the potential to introduce race conditions when multiple processes
    attempted to cache the same file at once. This was fixed by using a lock file tied to each
    cached file (see the sketch after this list).
  • get_text_field_mask() now supports padding indices that are not 0 (see the mask sketch after
    this list).
  • A bug where predictor.get_gradients() would return an empty dictionary if an embedding layer
    had trainable set to False.
  • A bug in PretrainedTransformerMismatchedIndexer in the case where a token consists of zero
    word pieces.
  • A bug where using a lazy dataset reader caused a UserWarning from PyTorch to be printed at
    every iteration during training.
  • Predictor names were inconsistently switching between dashes and underscores. Now they all use underscores.
  • Predictor.from_path now automatically loads plugins (unless you specify load_plugins=False) so
    that you don't have to manually import a bunch of modules when instantiating predictors from
    an archive path.
  • allennlp-server is automatically found as a plugin once again.
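
To make the lock-file fix above concrete, here is a minimal sketch of per-file locking with the
filelock package. The helper name and the lock-path convention are illustrative assumptions, not
AllenNLP's actual internals (which live in allennlp.common.file_utils):

```python
import os
import shutil

from filelock import FileLock

def cache_file(src_path: str, cache_path: str) -> str:
    # Tie the lock to the cached file, so every process trying to cache
    # the same resource serializes on the same lock.
    with FileLock(cache_path + ".lock"):
        # Whoever wins the race writes the file; later lock holders see
        # that it already exists and skip the copy.
        if not os.path.exists(cache_path):
            shutil.copy(src_path, cache_path)
    return cache_path
```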

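Similarly, a quick sketch of the non-zero padding index support; the keyword name padding_id is
an assumption based on the signature of allennlp.nn.util.get_text_field_mask:

```python
import torch
from allennlp.nn.util import get_text_field_mask

# Token ids padded with index 1 instead of the default 0.
text_field_tensors = {"tokens": {"tokens": torch.tensor([[2, 3, 4, 1, 1]])}}

# padding_id tells the mask computation which index means "padding".
mask = get_text_field_mask(text_field_tensors, padding_id=1)
# mask == tensor([[True, True, True, False, False]])
```
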
Added

  • A duplicate() method on Instances and Fields, to be used instead of copy.deepcopy() (a usage
    sketch follows this list).
  • A batch sampler that makes sure each batch contains approximately the same number of tokens
    (MaxTokensBatchSampler).
  • Functions to turn a sequence of token indices back into tokens.
  • The ability to use Hugging Face encoder/decoder models as token embedders.
  • Improvements to beam search.
  • ROUGE metric.
  • Polynomial decay learning rate scheduler.
  • A BatchCallback for logging CPU and GPU memory usage to TensorBoard. This is mainly for
    debugging, because using it can cause a significant slowdown in training.
  • Ability to run pretrained transformers as an embedder without training the weights (see the
    sketch after this list).
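
A usage sketch for the new duplicate() method; the WhitespaceTokenizer and the empty
token_indexers dict are simplifications for illustration:

```python
from allennlp.data import Instance
from allennlp.data.fields import TextField
from allennlp.data.tokenizers import WhitespaceTokenizer

# Build a minimal instance; real usage would supply proper token indexers.
tokens = WhitespaceTokenizer().tokenize("AllenNLP is a library .")
instance = Instance({"tokens": TextField(tokens, token_indexers={})})

# duplicate() is the supported replacement for copy.deepcopy(), which
# failed when a field held a tokenizer that cannot be deep-copied.
instance_copy = instance.duplicate()
```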

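And a minimal sketch of running a pretrained transformer with frozen weights, assuming the
train_parameters flag introduced in #4338:

```python
from allennlp.modules.token_embedders import PretrainedTransformerEmbedder

# With train_parameters=False the transformer's weights receive no
# gradient updates, so it acts as a fixed feature extractor.
embedder = PretrainedTransformerEmbedder(
    model_name="bert-base-uncased",
    train_parameters=False,
)
```
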
Changed

  • Similar to our caching mechanism, we introduced a lock file for the vocabulary to avoid race
    conditions when saving/loading the vocab to/from the same serialization directory in
    different processes.
  • Changed the Token, Instance, and Batch classes, along with all Field classes, to "slots"
    classes. This dramatically reduces the in-memory size of instances (see the sketch after
    this list).
  • SimpleTagger will no longer calculate span-based F1 metric when calculate_span_f1 is False.
  • CPU memory for every worker is now reported in the logs and the metrics. Previously only the
    CPU memory of the master process was reported, so the numbers were only correct in the
    non-distributed setting.
  • To be consistent with PyTorch IterableDataset, AllennlpLazyDataset no longer implements __len__().
    Previously it would always return 1.
  • Removed old tutorials in favor of the new AllenNLP Guide.
  • Changed vocabulary loading to handle the newline conventions of Windows, Linux, and Mac.
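
For readers unfamiliar with "slots" classes, this self-contained sketch (plain Python, not
AllenNLP code) shows where the memory savings come from:

```python
import sys

class TokenWithDict:
    """Ordinary class: each instance carries a per-instance __dict__."""

    def __init__(self, text: str, idx: int) -> None:
        self.text = text
        self.idx = idx

class TokenWithSlots:
    """Slots class: attributes live in fixed slots, with no __dict__."""

    __slots__ = ("text", "idx")

    def __init__(self, text: str, idx: int) -> None:
        self.text = text
        self.idx = idx

a, b = TokenWithDict("hi", 0), TokenWithSlots("hi", 0)
# Exact sizes vary by Python version, but the slots instance is
# consistently much smaller -- a big win across millions of tokens.
print(sys.getsizeof(a) + sys.getsizeof(a.__dict__))
print(sys.getsizeof(b))
```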

Commits

d98d13b add 'allennlp_server' to default plugins (#4348)
33d0cd8 fix file utils test (#4349)
f4d330a Update vocabulary load to a system-agnostic newline (#4342)
2012fea remove links to tutorials in API docs (#4346)
3d8ce44 Fixes spelling in changelog
73289bc Consistently use underscores in Predictor names (#4340)
2d03c41 Allow using pretrained transformers without fine-tuning them (#4338)
8f68d69 load plugins from Predictor.from_path (#4333)
5c6cc3a Bump mkdocs-material from 5.2.2 to 5.2.3 (#4341)
7ab7551 Removing old tutorials, pointing to the new guide in the README (#4334)
902d36a Fix bug with lazy data loading, un-implement len on AllennlpLazyDataset (#4328)
11b5799 log metrics in alphabetical order (#4327)
7d66b3e report CPU memory usage for each worker (#4323)
06bac68 make Instance, Batch, and all field classes "slots" classes (#4313)
2b2d141 Bump mypy from 0.770 to 0.780 (#4316)
a038c01 Update transformers requirement from <2.11,>=2.9 to >=2.9,<2.12 (#4315)
345459e Stop calculating span-based F1 metric when calculate_span_f1 is False. (#4302)
fc47bf6 Deals with the case where a word doesn't have any word pieces assigned (#4301)
11a08ae Making Token class a "slots" class (#4312)
32bccfb Fix a bug where predictor.get_gradients() would return an empty... (#4305)
33a4945 ensure CUDA available in GPU checks workflow (#4310)
d51ffa1 Update transformers requirement from <2.10,>=2.9 to >=2.9,<2.11 (#4282)
75c07ab Merge branch 'master' of github.com:allenai/allennlp
8c9421d fix Makefile
77b432f Update README.md (#4309)
720ad43 A few small fixes in the README.md (#4307)
a7265c0 move tensorboard memory logging to BatchCallback (#4306)
91d0fa1 remove setup.cfg (#4300)
5ad7a33 Support for bart in allennlp-models (#4169)
25134f2 add lock file within caching and vocab saving/loading mechanisms (#4299)
58dc84e add 'Feature request' label to template
9526f00 Update issue templates (#4293)
79999ec Adds a "duplicate()" method on instances and fields (#4294)
8ff47d3 Set version to rc6
