allenai/allennlp v0.9.0

Main features

  • AllenNLP Interpret. This lets you interpret the predictions of any AllenNLP model, using gradient-based visualization and attack techniques. You can (1) explore existing interpretations for models that we have implemented at demo.allennlp.org; (2) easily add interpretations for your own model, either programmatically or in a live demo; and (3) easily add new interpretation methods that can be used with any AllenNLP model. A minimal usage sketch follows this list.
  • Compatibility with pytorch-transformers, so you can use RoBERTa, or any other pretrained transformer, as your base encoder; see the second sketch below.
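
Here is a minimal sketch of the Interpret workflow using the gradient-based saliency interpreter. The archive path and the "sentence" input key are illustrative; they depend on the model you load:

    # Sketch only: substitute your own model archive and the JSON fields
    # your predictor expects.
    from allennlp.predictors import Predictor
    from allennlp.interpret.saliency_interpreters import SimpleGradient

    predictor = Predictor.from_path("/path/to/model.tar.gz")  # hypothetical path
    interpreter = SimpleGradient(predictor)

    # Returns a normalized saliency score per input token, keyed per instance.
    saliency = interpreter.saliency_interpret_from_json({"sentence": "a quietly moving film ."})
    print(saliency)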

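And a sketch of the pytorch-transformers side, via the new token embedder (registered as "pretrained_transformer"). The token ids below are placeholders; in practice you would produce them with the matching indexer/tokenizer for the model:

    # Sketch only: model_name can be any pytorch-transformers model string.
    import torch
    from allennlp.modules.token_embedders import PretrainedTransformerEmbedder

    embedder = PretrainedTransformerEmbedder(model_name="roberta-base")
    token_ids = torch.tensor([[0, 9064, 16, 2721, 2]])  # placeholder ids
    embeddings = embedder(token_ids)  # one 768-dim vector per token
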
Also of note

  • A new, more flexible seq2seq abstraction is available (though, honestly, I think we all agree that fairseq or OpenNMT are still better for seq2seq models); a rough config sketch follows this list.
  • When specifying types for registrable items, you can now use a fully qualified path, like "my_package.models.my_new_fancy_classifier", instead of needing to pass --include-package everywhere; see the example after this list.
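
The new abstraction is the "composed_seq2seq" model (PR #2913), which builds the model from separately swappable embedder, encoder, and decoder pieces. A rough sketch of the config shape, with the caveat that the parameter names are assumptions reconstructed from the PR and should be checked against the class docstrings:

    # Assumed config shape; key names (source_text_embedder, decoder_net, ...)
    # are best-effort guesses, not verified against the released code.
    composed_seq2seq_config = {
        "model": {
            "type": "composed_seq2seq",
            "source_text_embedder": {
                "token_embedders": {"tokens": {"type": "embedding", "embedding_dim": 128}}
            },
            "encoder": {"type": "lstm", "input_size": 128, "hidden_size": 128},
            "decoder": {
                "decoder_net": {
                    "type": "lstm_cell",
                    "decoding_dim": 128,
                    "target_embedding_dim": 128,
                },
                "target_embedder": {"embedding_dim": 128},
                "max_decoding_steps": 50,
                "beam_size": 5,
            },
        }
    }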

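For the fully qualified type lookup, the last dotted component is treated as the class name and everything before it as the module to import, so a config fragment like this (reusing the hypothetical names from the note above) works with no --include-package flag:

    # "my_package.models" is imported and "my_new_fancy_classifier" is looked
    # up inside it; both names are hypothetical.
    config_fragment = {
        "model": {
            "type": "my_package.models.my_new_fancy_classifier",
            # ... the model's usual constructor arguments ...
        }
    }
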
Complete commit list

052353e (tag: v0.9.0) bump version number to v0.9.0
ff0d44a (origin/master, origin/HEAD) reversing NER for interpet UI (#3283)
3b22011 Composed Sequence to Sequence Abstraction (#2913)
b85f29c Fix F1Measure returning true positives, false positives, et al. only for the first class (#3279)
64143c4 upgrade to latest pylint (#3266)
d09042e Fix crash when hotflip gets OOV input (#3277)
2a95022 Revert batching for input reduction (#3276)
052e8d3 Reduce number of samples in smoothgrad (#3273)
76d248f Reduce hotflip vocab size, batch input reduction beam search (#3270)
9a67546 fix empty sequence bug (#3271)
87fb294 Update question.md (#3267)
daed835 Fix wrong partition to types in DROP evaluation (#3263)
41a4776 Unidirectional LM doesn't return backward loss. (#3256)
3e0bad4 Minor fixes for interpret code (#3260)
05be16a allow implicit package imports (#3253)
48de866 Assorted fixes for run_with_beaker.py (#3248)
c732cbf Add additive attention & unittest (#3238)
07364c6 Make Instance in charge of when to re-index (#3239)
7b50b69 Replace staticmethods with classmethods (#3229)
7cfaab4 Add ERROR callback event (#2983)
ce50407 Revert "Use an NVIDIA base image. (#3177)" (#3222)
b1caa9e Use an NVIDIA base image. (#3177)
4625a9d Improve check_links.py CI script (#3141)
5e2206d Add a reference to Joe Barrow's blog
27ebcf6 added infer_type_and_cast flags (#3209)
bbaf1fc Benchmark iterator, avoid redundant queue, remove managers. (#3119)
78ee3d8 Targeted hotflip attacks and beam search for input reduction (#3206)
f2824fd Predictors for demo LMs, update for coref predictor (#3202)
d78ac70 Language model classes for making predictions (both masked LM and next token LM) (#3201)
8c06c4b Adding a LanguageModelHead abstraction (#3200)
370d512 Dataset readers for masked language modeling and next-token-language-modeling (#3147)
1eaa1ff Link to Discourse in README
030e28c Revert "Revert "Merge branch 'matt-gardner-transformer-embedder'""
6e1e371 Revert "Merge branch 'matt-gardner-transformer-embedder'"
4c7fa73 Merge branch 'matt-gardner-transformer-embedder'
07bdc4a Merge branch 'transformer-embedder' of https://github.com/matt-gardner/allennlp into matt-gardner-transformer-embedder
993034f Minor fixes so PretrainedTransformerIndexer works with roberta (#3203)
70e92e8 doc
ed93e52 pylint
195bf0c override method
6ec74aa Added a TokenEmbedder for use with pytorch-transformers
fb9a971 code for mixed bert embedding layers (#3199)
0e872a0 Clarify that scalar_mix_parameters takes unnormalized weights (#3198)
23efadd upgrade to pytorch 1.2 (#3182)
155a94e Add DropEmAndF1 metric to __init__.py (#3191)
7738cb5 Add exist_ok parameter to registrable.register decorator. (#3190)
ce6dc72 Add example of initializing weights from pretrained model to doc (#3188)
817814b Update documentation for bert_pooler.py (#3181)
112d8d0 Bump version numbers to v0.9.0-unreleased
