⚠️ This version of spaCy requires downloading new models. You can use the `spacy validate` command to find out which models need updating, and print update instructions. If you've been training your own models, you'll need to retrain them with the new version.
## ✨ New features and improvements

### Tagger, Parser, NER and Text Categorizer
- NEW: Experimental ULMFit/BERT/Elmo-like pretraining (see #2931) via the new `spacy pretrain` command. This pre-trains the CNN using BERT's cloze task. A new trick we're calling Language Modelling with Approximate Outputs is used to apply the pre-training to smaller models. The pre-training outputs CNN and embedding weights that can be used in `spacy train`, using the new `-t2v` argument.
- NEW: Allow the parser to do joint word segmentation and parsing. If you pass in data where the tokenizer over-segments, the parser now learns to merge the tokens.
- Make the parser, tagger and NER faster through better hyperparameters.
- Add a simpler, GPU-friendly option to `TextCategorizer`, and allow setting the `exclusive_classes` and `architecture` arguments on initialization (see the sketch after this list).
- Add `EntityRecognizer.labels` property.
- Remove the document length limit during training, by implementing faster Levenshtein alignment.
- Use Thinc v7.0, which defaults to single-threaded execution with the fast `blis` kernel for matrix multiplication. Parallelisation should be performed at the task level, e.g. by running more containers.
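For example, the new `TextCategorizer` options can be passed in when the component is created. A minimal sketch, assuming the `exclusive_classes` and `architecture` settings are supplied via the `config` argument of `nlp.create_pipe`:

```python
import spacy

nlp = spacy.blank("en")
# "simple_cnn" selects the simpler, GPU-friendly architecture;
# exclusive_classes=True means each document gets exactly one label.
textcat = nlp.create_pipe(
    "textcat",
    config={"exclusive_classes": True, "architecture": "simple_cnn"},
)
textcat.add_label("POSITIVE")
textcat.add_label("NEGATIVE")
nlp.add_pipe(textcat)
```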
### Models & Language Data
- NEW: 2-3 times faster tokenization across all languages at the same accuracy!
- NEW: Small accuracy improvements for parsing, tagging and NER for 6+ languages.
- NEW: The English and German models are now available under the MIT license.
- NEW: Statistical models for Greek.
- NEW: Alpha support for Tamil, Ukrainian and Kannada, and base language classes for Afrikaans, Bulgarian, Czech, Icelandic, Lithuanian, Latvian, Slovak, Slovenian and Albanian.
- Improve loading time of `French` by ~30%.
- Add `Vocab.writing_system` (populated via the language data) to expose settings like writing direction (see the example below).
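A quick illustration of the new attribute; the exact keys shown in the comment are an assumption based on the English language data:

```python
import spacy

nlp = spacy.blank("en")
# Settings populated from the language data, e.g. writing direction:
# {"direction": "ltr", "has_case": True, "has_letters": True}
print(nlp.vocab.writing_system)
```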
### CLI
- NEW: `pretrain` command for ULMFit/BERT/Elmo-like pretraining (see #2931).
- NEW: `ud-train` command, to train and evaluate using the CoNLL 2017 shared task data.
- Check if a model is already installed before downloading it via `spacy download`.
- Pass additional arguments of the `download` command to `pip` to customise installation.
- Improve `train` command by letting `GoldCorpus` stream data, instead of loading it all into memory.
- Improve `init-model` command, including support for lexical attributes and word vectors, using a variety of formats. This replaces the `spacy vocab` command, which is now deprecated.
- Add support for multi-task objectives to the `train` command.
- Add support for data augmentation to the `train` command.
### Other
- NEW: Enhanced pattern API for the rule-based `Matcher` (see #1971; a sketch follows this list).
- NEW: `Doc.retokenize` context manager for merging and splitting tokens more efficiently.
- NEW: Add support for custom pipeline component factories via entry points (#2348).
- NEW: Implement fastText vectors with subword features.
- NEW: Built-in rule-based NER component to add entities based on match patterns (see #2513).
- NEW: Allow `PhraseMatcher` to match on token attributes other than `ORTH`, e.g. `LOWER` (for case-insensitive matching) or even `POS` or `TAG` (example below).
- NEW: Replace `ujson`, `msgpack`, `msgpack-numpy`, `pickle`, `cloudpickle` and `dill` with our own package `srsly` to centralise dependencies and allow binary wheels.
- NEW: `Doc.to_json()` method which outputs data in spaCy's training format. This will be the only place where the format is hard-coded (see #2932).
- NEW: Built-in `EntityRuler` component to make it easier to build rule-based NER and combinations of statistical and rule-based systems (example below).
- NEW: `gold.spans_from_biluo_tags` helper that returns `Span` objects, e.g. to overwrite `doc.ents` (example below).
- Add warnings if the `.similarity` method is called with empty vectors or without word vectors.
- Improve the rule-based `Matcher` and add a `return_matches` keyword argument to `Matcher.pipe` to yield `(doc, matches)` tuples instead of only `Doc` objects, and `as_tuples` to add context to the `Doc` objects.
- Make stop words via `Token.is_stop` and `Lexeme.is_stop` case-insensitive.
- Accept `"TEXT"` as an alternative to `"ORTH"` in `Matcher` patterns.
- Use `black` for auto-formatting `.py` source and optimise the codebase using `flake8`. You can now run `flake8 spacy` and it should return no errors or warnings. See `CONTRIBUTING.md` for details.
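A short sketch of the enhanced `Matcher` pattern API. The set-membership, regex and optional-operator predicates are the v2.1 additions; the pattern content itself is just an illustration:

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)
pattern = [
    {"LOWER": {"IN": ["hello", "hi"]}},  # token is one of several values
    {"IS_PUNCT": True, "OP": "?"},       # optional punctuation
    {"TEXT": {"REGEX": "^[Ww]orld$"}},   # regular expression match
]
matcher.add("GREETING", None, pattern)
doc = nlp("Hello, world!")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)  # "Hello, world"
```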
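Case-insensitive phrase matching with the new `attr` argument, as a minimal sketch:

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")
# Match on the LOWER attribute instead of the default ORTH (verbatim text)
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
matcher.add("PRODUCT", None, nlp("SpaCy"))
doc = nlp("I'm using spacy for NLP.")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)  # "spacy"
```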
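The new `EntityRuler` slots into the pipeline like any other component; the patterns below are purely illustrative:

```python
import spacy
from spacy.pipeline import EntityRuler

nlp = spacy.blank("en")
ruler = EntityRuler(nlp)
ruler.add_patterns([
    {"label": "ORG", "pattern": "spaCy"},                # phrase pattern
    {"label": "GPE", "pattern": [{"LOWER": "berlin"}]},  # token pattern
])
nlp.add_pipe(ruler)
doc = nlp("spaCy was built in Berlin.")
print([(ent.text, ent.label_) for ent in doc.ents])
# [('spaCy', 'ORG'), ('Berlin', 'GPE')]
```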
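And the new gold-standard helper, sketched with hand-written BILUO tags:

```python
import spacy
from spacy.gold import spans_from_biluo_tags

nlp = spacy.blank("en")
doc = nlp("I like London and Berlin")
tags = ["O", "O", "U-LOC", "O", "U-LOC"]
# Convert the BILUO tags to Span objects and overwrite the entities
doc.ents = spans_from_biluo_tags(doc, tags)
print([(ent.text, ent.label_) for ent in doc.ents])
# [('London', 'LOC'), ('Berlin', 'LOC')]
```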
## 🔴 Bug fixes
- Fix issue #795: Fix behaviour of `Token.conjuncts`.
- Fix issue #1487: Add `Doc.retokenize()` context manager.
- Fix issue #1537: Make `Span.as_doc` return a copy, not a view.
- Fix issue #1574: Make sure stop words are available in medium and large English models.
- Fix issue #1585: Prevent parser from predicting unseen classes.
- Fix issue #1642: Replace `regex` with `re` and speed up tokenization.
- Fix issue #1665: Correct typos in symbol `Animacy_inan` and add `Animacy_nhum`.
- Fix issue #1748, #1798, #2756, #2934: Add simpler GPU-friendly option to `TextCategorizer`.
- Fix issue #1773: Prevent tokenizer exceptions from setting `POS` but not `TAG`.
- Fix issue #1782, #2343: Fix training on GPU.
- Fix issue #1816: Allow custom `Language` subclasses via entry points.
- Fix issue #1865: Correct licensing of `it_core_news_sm` model.
- Fix issue #1889: Make stop words case-insensitive.
- Fix issue #1903: Add `relcl` dependency label to symbols.
- Fix issue #1963: Resize `Doc.tensor` when merging spans.
- Fix issue #1971: Update `Matcher` engine to support regex, extension attributes and rich comparison.
- Fix issue #2014: Make `Token.pos_` writeable.
- Fix issue #2091: Fix `displacy` support for RTL languages.
- Fix issue #2203, #3268: Prevent bad interaction of lemmatizer and tokenizer exceptions.
- Fix issue #2329: Correct `TextCategorizer` and `GoldParse` API docs.
- Fix issue #2369: Respect pre-defined warning filters.
- Fix issue #2390: Support setting lexical attributes during retokenization.
- Fix issue #2396: Fix `Doc.get_lca_matrix`.
- Fix issue #2464, #3009: Fix behaviour of `Matcher`'s `?` quantifier.
- Fix issue #2482: Fix serialization when parser model is empty.
- Fix issue #2512, #2153: Fix issue with deserialization into non-empty vocab.
- Fix issue #2603: Improve handling of missing NER tags.
- Fix issue #2644: Add table explaining training metrics to docs.
- Fix issue #2648: Fix `KeyError` in `Vectors.most_similar`.
- Fix issue #2671, #2675: Fix incorrect match ID on some patterns.
- Fix issue #2693: Only use `'sentencizer'` as built-in sentence boundary component name.
- Fix issue #2728: Fix HTML escaping in `displacy` NER visualization and correct API docs.
- Fix issue #2740: Add ability to pass additional arguments to pipeline components.
- Fix issue #2754, #3028: Make `NORM` a `Token` attribute instead of a `Lexeme` attribute to allow setting context-specific norms in tokenizer exceptions.
- Fix issue #2769: Fix issue that'd cause segmentation fault when calling `EntityRecognizer.add_label`.
- Fix issue #2772: Fix bug in sentence starts for non-projective parses.
- Fix issue #2779: Fix handling of pre-set entities.
- Fix issue #2782: Make `like_num` work with prefixed numbers.
- Fix issue #2833: Raise better error if `Token` or `Span` are pickled.
- Fix issue #2838: Add `Retokenizer.split` method to split one token into several.
- Fix issue #2869: Make `doc[0].is_sent_start == True`.
- Fix issue #2870: Make it illegal for the entity recognizer to predict whitespace tokens as `B`, `L` or `U`.
- Fix issue #2871: Fix vectors for reserved words.
- Fix issue #2901: Fix issue with first call of `nlp` in Japanese (MeCab).
- Fix issue #2924: Make IDs of displaCy arcs more unique to avoid clashes.
- Fix issue #3012: Fix clobber of `Doc.is_tagged` in `Doc.from_array`.
- Fix issue #3027: Allow `Span` to take unicode value for `label` argument.
- Fix issue #3036: Support mutable default arguments in extension attributes.
- Fix issue #3048: Raise better errors for uninitialized pipeline components.
- Fix issue #3064: Allow single string attributes in `Doc.to_array`.
- Fix issue #3093, #3067: Set `vectors.name` correctly when exporting model via CLI.
- Fix issue #3112: Make sure entity types are added correctly on GPU.
- Fix issue #3191: Fix pickling of `Japanese`.
- Fix issue #3122: Correct docs of `Token.subtree` and `Span.subtree`.
- Fix issue #3128: Improve error handling in converters.
- Fix issue #3248: Fix `PhraseMatcher` pickling and make `__len__` consistent.
- Fix issue #3274: Make `Token.sent` work as expected without the parser.
- Fix issue #3277: Add en/em dash to tokenizer prefixes and suffixes.
- Fix issue #3346: Expose Japanese stop words in language class.
- Fix issue #3357: Update displaCy examples in docs to correctly show `Token.pos_`.
- Fix issue #3345: Fix NER when preset entities cross sentence boundaries.
- Fix issue #3348: Don't use `numpy` directly for similarity.
- Fix issue #3366: Improve converters, training data formats and docs.
- Fix issue #3369: Fix `#egg` fragments in direct downloads.
- Fix issue #3382: Make `Doc.from_array` consistent with `Doc.to_array`.
- Fix issue #3398: Don't set extension attributes in language classes.
- Fix issue #3373: Merge and improve `conllu` converters.
- Fix serialization of custom tokenizer if not all functions are defined.
- Fix bugs in beam-search training objective.
- Fix problems with model pickling.
## ⚠️ Backwards incompatibilities
- This version of spaCy requires downloading new models. You can use the `spacy validate` command to find out which models need updating, and print update instructions.
- If you've been training your own models, you'll need to retrain them with the new version.
- Due to difficulties linking our new `blis` for faster platform-independent matrix multiplication, v2.1.x currently doesn't work on Python 2.7 on Windows. We expect this to be corrected in the future.
- While the `Matcher` API is fully backwards compatible, its algorithm has changed to fix a number of bugs and performance issues. This means the `Matcher` in v2.1.x may produce different results compared to the `Matcher` in v2.0.x.
- The deprecated `Doc.merge` and `Span.merge` methods still work, but you may notice that they now run slower when merging many objects in a row. That's because the merging engine was rewritten to be more reliable and to support more efficient merging in bulk. To take advantage of this, you should rewrite your logic to use the `Doc.retokenize` context manager and perform as many merges as possible together in the `with` block.
```diff
- doc[1:5].merge()
- doc[6:8].merge()
+ with doc.retokenize() as retokenizer:
+     retokenizer.merge(doc[1:5])
+     retokenizer.merge(doc[6:8])
```
- The serialization methods `to_disk`, `from_disk`, `to_bytes` and `from_bytes` now support a single `exclude` argument to provide a list of string names to exclude. The docs have been updated to list the available serialization fields for each class. The `disable` argument on the `Language` serialization methods has been renamed to `exclude` for consistency.
```diff
- nlp.to_disk("/path", disable=["parser", "ner"])
+ nlp.to_disk("/path", exclude=["parser", "ner"])
- data = nlp.tokenizer.to_bytes(vocab=False)
+ data = nlp.tokenizer.to_bytes(exclude=["vocab"])
```
- The `.pos` value for several common English words has changed, due to corrections to long-standing mistakes in the English tag map (see #593, #3311).
- For better compatibility with the Universal Dependencies data, the lemmatizer now preserves capitalization, e.g. for proper nouns (see #3256).
- The keyword argument `n_threads` on the `.pipe` methods is now deprecated, as the v2.x models cannot release the global interpreter lock. (Future versions may introduce an `n_process` argument for parallel inference via multiprocessing.)
- The `Doc.print_tree` method is now deprecated in favour of a unified `Doc.to_json` method, which outputs data in the same format as the expected JSON training data.
- The built-in rule-based sentence boundary detector is now only called `'sentencizer'` – the name `'sbd'` is deprecated.
```diff
- sentence_splitter = nlp.create_pipe('sbd')
+ sentence_splitter = nlp.create_pipe('sentencizer')
```
- The `is_sent_start` attribute of the first token in a `Doc` now correctly defaults to `True`. It previously defaulted to `None`.
- The `spacy train` command now lets you specify a comma-separated list of pipeline component names, instead of separate flags like `--no-parser` to disable components. This is more flexible and also handles custom components out-of-the-box.
```diff
- $ spacy train en /output train_data.json dev_data.json --no-parser
+ $ spacy train en /output train_data.json dev_data.json --pipeline tagger,ner
```
- The `spacy init-model` command now uses a `--jsonl-loc` argument to pass in a newline-delimited JSON (JSONL) file containing one lexical entry per line, instead of separate `--freqs-loc` and `--clusters-loc` arguments.
```diff
- $ spacy init-model en ./model --freqs-loc ./freqs.txt --clusters-loc ./clusters.txt
+ $ spacy init-model en ./model --jsonl-loc ./vocab.jsonl
```
- Also note that some of the model licenses have changed: `it_core_news_sm` is now correctly licensed under CC BY-NC-SA 3.0, and all English and German models are now published under the MIT license.
## 📈 Benchmarks
| Model | Language | Version | UAS | LAS | POS | NER F | Vec | Size |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| `en_core_web_sm` | English | 2.1.0 | 91.5 | 89.7 | 96.8 | 85.9 | 𐄂 | 10 MB |
| `en_core_web_md` | English | 2.1.0 | 91.8 | 90.0 | 96.9 | 86.6 | ✓ | 90 MB |
| `en_core_web_lg` | English | 2.1.0 | 91.8 | 90.1 | 97.0 | 86.6 | ✓ | 788 MB |
| `de_core_news_sm` | German | 2.1.0 | 90.7 | 88.6 | 96.3 | 83.1 | 𐄂 | 10 MB |
| `de_core_news_md` | German | 2.1.0 | 91.2 | 89.4 | 96.6 | 83.8 | ✓ | 210 MB |
| `es_core_news_sm` | Spanish | 2.1.0 | 90.4 | 87.3 | 96.9 | 89.5 | 𐄂 | 10 MB |
| `es_core_news_md` | Spanish | 2.1.0 | 91.0 | 88.2 | 97.2 | 89.7 | ✓ | 69 MB |
| `pt_core_news_sm` | Portuguese | 2.1.0 | 89.1 | 85.9 | 80.4 | 88.9 | 𐄂 | 12 MB |
| `fr_core_news_sm` | French | 2.1.0 | 87.6 | 84.7 | 94.5 | 82.6 | 𐄂 | 14 MB |
| `fr_core_news_md` | French | 2.1.0 | 89.1 | 86.4 | 95.3 | 83.1 | ✓ | 82 MB |
| `it_core_news_sm` | Italian | 2.1.0 | 91.0 | 87.3 | 95.8 | 86.1 | 𐄂 | 10 MB |
| `nl_core_news_sm` | Dutch | 2.1.0 | 83.7 | 77.6 | 91.6 | 87.0 | 𐄂 | 10 MB |
| `el_core_news_sm` | Greek | 2.1.0 | 84.4 | 80.6 | 94.6 | 71.6 | 𐄂 | 10 MB |
| `el_core_news_md` | Greek | 2.1.0 | 88.3 | 85.0 | 96.6 | 81.1 | ✓ | 126 MB |
| `xx_ent_wiki_sm` | Multi | 2.1.0 | - | - | - | 81.3 | 𐄂 | 3 MB |
💬 UAS: Unlabelled dependencies (parser). LAS: Labelled dependencies (parser). POS: Part-of-speech tags (fine-grained tags, i.e. `Token.tag_`). NER F: Named entities (F-score). Vec: Model contains word vectors. Size: Model file size (zipped archive).
## 📖 Documentation and examples
Although it looks pretty much the same, we've rebuilt the entire documentation using Gatsby and MDX. It's now an even faster progressive web app and allows us to write all content entirely in Markdown, without having to compromise on easy-to-use custom UI components. We're hoping that the Markdown source will make it even easier to contribute to the documentation. For more details, check out the styleguide and source.
While converting the pages to Markdown, we've also fixed a bunch of typos, improved the existing pages and added some new content:
- Usage Guide: Rule-based Matching. How to use the `Matcher`, `PhraseMatcher` and the new `EntityRuler`, and write powerful components to combine statistical models and rules.
- Usage Guide: Saving and Loading. Everything you need to know about serialization, and how to save and load pipeline components, package your spaCy models as Python modules and use entry points.
- Usage Guide: Merging and Splitting. How to retokenize a `Doc` using the new `retokenize` context manager, merge spans into single tokens and split single tokens into multiple.
- Universe: Videos and Podcasts
- API: `EntityRuler`
- API: `SentenceSegmenter`
- API: Pipeline functions
## 👥 Contributors
Thanks to @DuyguA, @giannisdaras, @mgogoulos, @louridas, @skrcode, @gavrieltal, @svlandeg, @jarib, @alvaroabascar, @kbulygin, @moreymat, @mirfan899, @ozcankasal, @willprice, @alvations, @amperinet, @retnuh, @Loghijiaha, @DeNeutoy, @boena, @BramVanroy, @pganssle, @foufaster, @adrianeboyd, @maknotavailable, @pierremonico, @lauraBaakman, @juliamakogon, @Gizzio, @Abhijit-2592, @akki2825, @grivaz, @roshni-b, @mpuig, @mikelibg, @danielkingai2, @adrienball and @Poluglottos for the pull requests and contributions.