allenai/allennlp v1.0.0rc5

Fixed

  • Fixed a bug where PretrainedTransformerTokenizer crashed with some transformers (#4267).
  • Made cached_path work offline (see the sketch after this list).
  • Tons of docstring inconsistencies resolved.
  • Nightly builds no longer run on forks.
  • Distributed training now automatically figures out which worker should see which instances.
  • Fixed a race condition in distributed training caused by the master process saving the vocab to files while other processes might still be reading them.
  • Removed unused dependencies from setup.py.
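
A minimal sketch of the offline behavior (the URL below is only an example; the fallback semantics follow #4253): cached_path downloads and caches a remote file on the first call, and when the network is later unavailable it falls back to the most recently cached copy instead of raising.

```python
from allennlp.common.file_utils import cached_path

# Example resource; any remote URL is handled the same way. The first call
# downloads the file into the local cache (under ~/.allennlp/cache by default).
url = "https://allennlp.s3.amazonaws.com/datasets/glove/glove.6B.50d.txt.gz"
local_path = cached_path(url)

# Later, with no network access, cached_path no longer fails on the
# freshness check; it falls back to the latest cached copy of the file.
offline_path = cached_path(url)
```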

Added

  • Additional CI checks to ensure docstrings are consistently formatted.
  • Ability to train on CPU with multiple processes by setting cuda_devices to a list of negative integers in your training config. For example: "distributed": {"cuda_devices": [-1, -1]}. This is mainly to make it easier to test and debug distributed training code (see the config sketch after this list).
  • Documentation for when parameters don't need config file entries.
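
As a fuller illustration, here is a sketch of such a config built as a Params dict in Python; everything except the "distributed" section is a hypothetical stand-in for whatever an existing single-process config already contains.

```python
from allennlp.common.params import Params

params = Params({
    "dataset_reader": {"type": "my_reader"},  # hypothetical registered reader
    "train_data_path": "data/train.txt",      # hypothetical data path
    "model": {"type": "my_model"},            # hypothetical registered model
    "data_loader": {"batch_size": 8},
    "trainer": {"num_epochs": 1, "optimizer": {"type": "adam"}},
    # Two worker processes, each assigned cuda_device -1, i.e. both on CPU.
    "distributed": {"cuda_devices": [-1, -1]},
})
```

With the same keys in a JSON/jsonnet experiment file, training runs as usual via allennlp train my_config.jsonnet -s /path/to/output.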

Changed

  • The allennlp test-install command now just ensures the core submodules can
    be imported successfully, and prints out other useful information such as the AllenNLP version, the PyTorch version, and the number of GPU devices available (see the snippet after this list).
  • All of the tests moved from allennlp/tests to tests at the root level, and
    allennlp/tests/fixtures moved to test_fixtures at the root level. The PyPI source and wheel distributions will no longer include tests and fixtures.
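
For completeness, the command can also be driven programmatically; this is only a sketch, and running allennlp test-install from a shell is the normal route.

```python
import sys
from allennlp.commands import main

# Equivalent to running `allennlp test-install` in a shell: checks that the
# core submodules import cleanly and prints the AllenNLP version, the PyTorch
# version, and the number of visible GPU devices.
sys.argv = ["allennlp", "test-install"]
main()
```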

Commits

7dcc60b Update version for release v1.0.0rc5
f421e91 clean up dependencies (#4290)
a9be961 Bump mkdocs-material from 5.2.0 to 5.2.2 (#4288)
69fc5b4 Update saliency_interpreter.py (#4286)
e52fea2 Makes the EpochCallback work the same way as the BatchCallback (#4277)
6574823 Make special token inference logic more robust (#4267)
24617c0 Bump overrides from 2.8.0 to 3.0.0 (#4249)
f7d9673 Bump mkdocs-material from 5.1.6 to 5.2.0 (#4257)
5198a5c Document when parameters do not need an entry in a config file (#4275)
4ee2735 update contribution guidelines (#4271)
dacbb75 wait for non-master workers to finish reading vocab before master worker saves it (#4274)
f27475a Enable multi-process training on CPU (#4272)
7e683dd Workers in the distributed scenario need to see different instances (#4241)
9c51d6c move test and fixtures to root level and simplify test-install command (#4264)
65a146d Clean up the command to create a commit list. (#4263)
88683d4 switch to tokenless codecov upload (#4261)
b41d448 Add a CHANGELOG (#4260)
7d71398 make 'cached_path' work offline (#4253)
fc81067 move py2md back to scripts (#4251)
4de68a4 Improves API docs and docstring consistency (#4244)
1b0d231 tick version to rc5
