v2.0.0: Neural networks, 13 new models for 7+ languages, better training, custom pipelines, Pickle & lots of API improvements


We're very excited to finally introduce spaCy v2.0. The new version gets spaCy up to date with the latest deep learning technologies and makes it much easier to run spaCy in scalable cloud computing workflows. We've fixed over 60 bugs (every open bug!), including several long-standing issues, trained 13 neural network models for 7+ languages and added alpha tokenization support for 8 new languages. We also re-wrote almost all of the usage guides, API docs and code examples.

pip install -U spacy
conda install -c conda-forge spacy

✨ Major features and improvements

  • NEW: Convolutional neural network models for English, German, Spanish, Portuguese, French, Italian, Dutch and multi-language NER. Substantial improvements in accuracy over the v1.x models.
  • NEW: Vectors class for managing word vectors, plus trainable document vectors and contextual similarity via convolutional neural networks.
  • NEW: Custom processing pipeline components and extension attributes on the Doc, Token and Span via Doc._, Token._ and Span._.
  • NEW: Built-in, trainable text classification pipeline component.
  • NEW: Built-in displaCy visualizers for dependencies and entities, with Jupyter notebook support.
  • NEW: Alpha tokenization for Danish, Polish, Indonesian, Thai, Hindi, Irish, Turkish, Croatian and Romanian.
  • Improved language data, support for lazy loading and simple, lookup-based lemmatization for English, German, French, Spanish, Italian, Hungarian, Portuguese and Swedish.
  • Support for multi-language models and new MultiLanguage class (xx).
  • Strings are now resolved to hash values instead of being mapped to integer IDs. This means that the string-to-int mapping no longer depends on the vocabulary state (see the sketch after this list).
  • Improved and consistent saving, loading and serialization across objects, plus Pickle support.
  • PhraseMatcher for matching large terminology lists as Doc objects, plus revised Matcher API.
  • New CLI commands validate, vocab and evaluate, plus an entry point so you can use the spacy command instead of python -m spacy.
  • Experimental GPU support via Chainer's CuPy module.
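
Because IDs are now hashes, a string's ID can be computed without any shared vocabulary state. A quick sketch of the round trip through the StringStore, using a blank English class so no model download is needed:

from spacy.lang.en import English

nlp = English()  # blank pipeline: tokenizer only
doc = nlp(u'I love coffee')
coffee_hash = nlp.vocab.strings[u'coffee']    # 64-bit hash, identical across vocabs
coffee_text = nlp.vocab.strings[coffee_hash]  # resolve the hash back to its string
assert coffee_text == u'coffee'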

🔮 Models

spaCy v2.0 comes with 13 new convolutional neural network models for 7+ languages. The models have been designed and implemented from scratch specifically for spaCy. A novel bloom embedding strategy with subword features is used to support huge vocabularies in tiny tables.

All core models include part-of-speech tags, dependency labels and named entities. Small models include only context-specific token vectors, while medium-sized and large models ship with word vectors. For more details, see the models directory or try our new model comparison tool.

Name                Language         Features                             Size
en_core_web_sm      English          Tagger, parser, entities             35 MB
en_core_web_md      English          Tagger, parser, entities, vectors    115 MB
en_core_web_lg      English          Tagger, parser, entities, vectors    812 MB
en_vectors_web_lg   English          Vectors                              627 MB
de_core_news_sm     German           Tagger, parser, entities             36 MB
es_core_news_sm     Spanish          Tagger, parser, entities             35 MB
es_core_news_md     Spanish          Tagger, parser, entities, vectors    93 MB
pt_core_news_sm     Portuguese       Tagger, parser, entities             36 MB
fr_core_news_sm     French           Tagger, parser, entities             37 MB
fr_core_news_md     French           Tagger, parser, entities, vectors    106 MB
it_core_news_sm     Italian          Tagger, parser, entities             34 MB
nl_core_news_sm     Dutch            Tagger, parser, entities             34 MB
xx_ent_wiki_sm      Multi-language   Entities                             33 MB

You can download a model by using its name or shortcut. To load a model, use spacy.load(), or import it as a module and call its load() method:

# Download the model from the command line
spacy download en_core_web_sm

# Load it with spacy.load()
import spacy
nlp = spacy.load('en_core_web_sm')

# Or import the model package directly and call its load() method
import en_core_web_sm
nlp = en_core_web_sm.load()

📈 Benchmarks

spaCy v2.0's new neural network models bring significant improvements in accuracy, especially for English Named Entity Recognition. The new en_core_web_lg model makes about 25% fewer mistakes than the corresponding v1.x model and is within 1% of the current state-of-the-art (Strubell et al., 2017). The v2.0 models are also cheaper to run at scale, as they require under 1 GB of memory per process.

English

Model                  spaCy   Type     UAS    LAS    NER F   POS    Size
en_core_web_sm-2.0.0   v2.x    neural   91.7   89.8   85.3    97.0   35 MB
en_core_web_md-2.0.0   v2.x    neural   91.7   89.8   85.9    97.1   115 MB
en_core_web_lg-2.0.0   v2.x    neural   91.9   90.1   85.9    97.2   812 MB
en_core_web_sm-1.1.0   v1.x    linear   86.6   83.8   78.5    96.6   50 MB
en_core_web_md-1.2.1   v1.x    linear   90.6   88.5   81.4    96.7   1 GB

Spanish

Model                   spaCy   Type     UAS    LAS    NER F   POS    Size
es_core_news_sm-2.0.0   v2.x    neural   89.8   86.8   88.7    96.9   35 MB
es_core_news_md-2.0.0   v2.x    neural   90.2   87.2   89.0    97.8   93 MB
es_core_web_md-1.1.0    v1.x    linear   87.5   n/a    94.2    96.7   377 MB

For more details of the other models, see the models directory and model comparison tool.

🔴 Bug fixes

  • Fix issue #125, #228, #299, #377, #460, #606, #930: Add full Pickle support.
  • Fix issue #152, #264, #322, #343, #437, #514, #636, #785, #927, #985, #992, #1011: Fix and improve serialization and deserialization of Doc objects.
  • Fix issue #285, #1225: Fix memory growth problem when streaming data.
  • Fix issue #512: Improve parser to prevent it from returning two ROOT objects.
  • Fix issue #519, #611, #725: Retrain German model with better tokenized input.
  • Fix issue #524: Improve parser and handling of noun chunks.
  • Fix issue #621: Prevent double spaces from changing the parser result.
  • Fix issue #664, #999, #1026: Fix bugs that would prevent loading trained NER models.
  • Fix issue #671, #809, #856: Fix importing and loading of word vectors.
  • Fix issue #683, #1052, #1442: Don't require tag maps to provide SP tag.
  • Fix issue #753: Resolve bug that would tag OOV items as personal pronouns.
  • Fix issue #860, #956, #1085, #1381: Allow custom attribute extensions on Doc, Token and Span.
  • Fix issue #905, #954, #1021, #1040, #1042: Improve parsing model and allow faster accuracy updates.
  • Fix issue #933, #977, #1406: Update online demos.
  • Fix issue #995: Improve punctuation rules for Hebrew and other non-Latin languages.
  • Fix issue #1008: The train command now works correctly when used without dev_data.
  • Fix issue #1012: Improve word vectors documentation.
  • Fix issue #1043: Improve NER models and allow faster accuracy updates.
  • Fix issue #1044: Fix bugs in French model and improve performance.
  • Fix issue #1051: Improve error messages if functionality needs a model to be installed.
  • Fix issue #1071: Correct typo of "whereve" in English tokenizer exceptions.
  • Fix issue #1088: Emoji are now split into separate tokens wherever possible.
  • Fix issue #1240: Allow merging Spans without keyword arguments.
  • Fix issue #1243: Resolve undefined names in deprecated functions.
  • Fix issue #1250: Fix caching bug that would cause tokenizer to ignore special case rules after first parse.
  • Fix issue #1257: Ensure the comparison operator == works as expected on tokens.
  • Fix issue #1291: Improve documentation of training format.
  • Fix issue #1336: Fix bug that caused inconsistencies in NER results.
  • Fix issue #1375: Make sure Token.nbor raises IndexError correctly.
  • Fix issue #1450: Fix error when OP quantifier "*" ends the match pattern.
  • Fix issue #1452: Fix bug that would mutate the original text.

📖 Documentation and examples

⚠️ Backwards incompatibilities

For the complete table and more details, see the guide on what's new in v2.0.

Note that the old v1.x models are not compatible with spaCy v2.0.0. If you've trained your own models, you'll have to re-train them to be able to use them with the new version. For a full overview of changes in v2.0, see the documentation and guide on migrating from spaCy 1.x.

Document processing

The Language.pipe method allows spaCy to batch documents, which brings a significant performance advantage in v2.0. The new neural networks introduce some overhead per batch, so if you're processing a number of documents in a row, you should use nlp.pipe and process the texts as a stream.

docs = nlp.pipe(texts)  # GOOD: texts are buffered and processed in batches
# BAD: docs = (nlp(text) for text in texts)

To make usage easier, there's now a boolean as_tuples keyword argument that lets you pass in an iterator of (text, context) pairs and get back an iterator of (doc, context) tuples.
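
A minimal sketch of as_tuples, assuming each context is a dict carrying a document ID:

data = [(u'This is the first text', {'id': 1}),
        (u'This is the second text', {'id': 2})]
for doc, context in nlp.pipe(data, as_tuples=True):
    # each context object is passed through untouched alongside its Doc
    print(doc.text, context['id'])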

Loading models

spacy.load() is now only intended for loading models – if you need an empty language class, import it directly instead, e.g. from spacy.lang.en import English. If the model you're loading is a shortcut link or package name, spaCy will expect it to be a model package, import it and call its load() method. If you supply a path, spaCy will expect it to be a model data directory and use the meta.json to initialise a language class and call nlp.from_disk() with the data path.

nlp = spacy.load('en')                    # shortcut link
nlp = spacy.load('en_core_web_sm')        # package name
nlp = spacy.load('/model-data')           # path to model data directory
nlp = English().from_disk('/model-data')  # import the language class directly
# OLD: nlp = spacy.load('en', path='/model-data')

Training

All built-in pipeline components are now subclasses of Pipe, fully trainable and serializable, and follow the same API. Instead of updating the model and telling spaCy when to stop, you can now explicitly call begin_training, which returns an optimizer you can pass into the update function. While update still accepts sequences of Doc and GoldParse objects, you can now also pass in a list of strings and dictionaries describing the annotations. This is the recommended usage, as it removes one layer of abstraction from the training.

optimizer = nlp.begin_training()  # returns an optimizer to pass to update()
for itn in range(1000):
    for texts, annotations in train_data:
        nlp.update(texts, annotations, sgd=optimizer)
nlp.to_disk('/model')
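
For reference, a hypothetical train_data matching the loop above, where each entry pairs a batch of raw texts with annotation dicts:

train_data = [
    ([u'Uber blew through $1 million'],  # batch of raw texts
     [{'entities': [(0, 4, 'ORG')]}]),   # matching annotation dicts
]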

Serialization

spaCy's serialization API is now consistent across objects. All containers and pipeline components have .to_disk(), .from_disk(), .to_bytes() and .from_bytes() methods.

nlp.to_disk('/model')
nlp.vocab.to_disk('/vocab')
# OLD: nlp.save_to_directory('/model')
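
The same pattern works for byte serialization; a minimal sketch that round-trips a Doc:

from spacy.tokens import Doc

doc = nlp(u'Serialize me.')
doc_bytes = doc.to_bytes()                      # Doc -> bytes
new_doc = Doc(nlp.vocab).from_bytes(doc_bytes)  # bytes -> Doc, sharing the vocab
assert new_doc.text == doc.text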

Processing pipelines and attribute extensions

Models can now define their own processing pipelines as a list of strings, mapping to component names. Components receive a Doc, modify it and return it to be processed by the next component in the pipeline. You can add custom components to nlp.pipeline and create extensions to add custom attributes, properties and methods to the Doc, Token and Span objects.

nlp = spacy.load('en')
my_component = MyComponent()                 # any callable that takes and returns a Doc
nlp.add_pipe(my_component, before='tagger')  # insert it before the tagger

from spacy.tokens import Doc
Doc.set_extension('my_attr', default=True)   # register the attribute under ._
doc = nlp(u"This is a text.")
assert doc._.my_attr
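
MyComponent above is only illustrative; a custom component just needs to be a callable that receives a Doc, modifies it and returns it:

class MyComponent(object):
    name = 'my_component'  # how the component is referenced in the pipeline

    def __call__(self, doc):
        # modify the Doc in place here, e.g. set custom attributes
        return doc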

👥 Contributors

This release is brought to you by @honnibal and @ines. Thanks to @Gregory-Howard, @luvogels, @ferdous-al-imran, @uetchy, @akYoung, @kengz, @raphael0202, @ardeego, @yuvalpinter, @dvsrepo, @frascuchon, @oroszgy, @v3t3a, @Tpt, @thinline72, @jarle, @jimregan, @nkruglikov, @delirious-lettuce, @geovedi, @wannaphongcom, @h4iku, @IamJeffG, @binishkaspar, @ramananbalakrishnan, @jerbob92, @mayukh18, @abhi18av and @uwol for the pull requests and contributions. Also thanks to everyone who submitted bug reports and took the spaCy user survey – your feedback made a big difference!
