Optional memory management for persistent services
Support a new context manager method, `Language.memory_zone()`, allowing long-running services to avoid growing memory usage from cached entries in the `Vocab` or `StringStore`. Once the memory zone block ends, spaCy will evict the `Vocab` and `StringStore` entries that were added during the block, freeing up memory. `Doc` objects created inside a memory zone block should not be accessed outside the block.
The current implementation disables population of the tokenizer cache inside the memory zone, which has some performance impact. The difference will likely be negligible if you're running a full pipeline, but if you're only running the tokenizer, it will be much slower. If this is a problem, you can mitigate it by warming the cache first: process the first few batches of text without creating a memory zone. Support for memory zones in the tokenizer will be added in a future update.
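A minimal sketch of that warm-up pattern might look like this (the input file name, batch size, and number of warm-up batches are illustrative assumptions, not recommendations):

```python
from itertools import islice

import spacy
from spacy.util import minibatch

nlp = spacy.load("en_core_web_sm")
texts = (line.strip() for line in open("texts.txt", encoding="utf8"))  # hypothetical input
batches = minibatch(texts, 1000)

# Warm the tokenizer cache outside any memory zone, so entries for frequent
# tokens are cached permanently before zoned processing begins.
for batch in islice(batches, 3):
    for _ in nlp.pipe(batch):
        pass

# Remaining batches run inside memory zones, keeping memory usage flat.
for batch in batches:
    with nlp.memory_zone():
        for doc in nlp.pipe(batch):
            pass  # extract plain-Python results here; don't keep the Doc objects
```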
The `Language.memory_zone()` context manager also checks for a `memory_zone()` method on pipeline components, so that components can perform similar memory management if necessary. None of the built-in components currently require this.
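As a rough sketch of how a custom component could hook into this, here is a hypothetical component that evicts its own cache entries when a zone ends. This assumes the component's `memory_zone()` is entered as a context manager for the duration of the block; the component, factory name, and cache are invented for illustration:

```python
from contextlib import contextmanager
from typing import Dict, Iterator

from spacy.language import Language
from spacy.tokens import Doc


class CachedComponent:
    """A made-up component that caches per-text results."""

    def __init__(self) -> None:
        self._cache: Dict[str, int] = {}

    def __call__(self, doc: Doc) -> Doc:
        self._cache.setdefault(doc.text, len(doc))  # stand-in for real work
        return doc

    @contextmanager
    def memory_zone(self) -> Iterator[None]:
        # Snapshot the cache keys, then evict anything added inside the zone.
        before = set(self._cache)
        try:
            yield
        finally:
            for key in set(self._cache) - before:
                del self._cache[key]


@Language.factory("cached_component")  # hypothetical factory name
def create_cached_component(nlp: Language, name: str) -> CachedComponent:
    return CachedComponent()
```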
If your component needs to add non-transient entries to the `StringStore` or `Vocab`, you can pass the `allow_transient=False` flag to the `Vocab.add()` or `StringStore.add()` methods.
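For instance, a minimal sketch of adding a permanent entry from inside a zone, via the pipeline's shared `StringStore` (the string itself is a made-up example):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
with nlp.memory_zone():
    # Added permanently, despite the surrounding memory zone.
    nlp.vocab.strings.add("MY_PERSISTENT_LABEL", allow_transient=False)
# The entry survives after the zone ends.
assert "MY_PERSISTENT_LABEL" in nlp.vocab.strings
```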
Example usage:
```python
import json
from collections import Counter
from pathlib import Path
from typing import Iterator

import spacy
import typer
from spacy.util import minibatch


def texts(path: Path) -> Iterator[str]:
    with path.open("r", encoding="utf8") as file_:
        for line in file_:
            yield json.loads(line)["text"]


def main(jsonl_path: Path) -> None:
    nlp = spacy.load("en_core_web_sm")
    counts = Counter()
    batches = minibatch(texts(jsonl_path), 1000)
    for i, batch in enumerate(batches):
        print("Batch", i)
        with nlp.memory_zone():
            for doc in nlp.pipe(batch):
                for token in doc:
                    counts[token.text] += 1
    for word, count in counts.most_common(100):
        print(count, word)


if __name__ == "__main__":
    typer.run(main)
```
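With `typer`, the script takes the JSONL path as a command-line argument, e.g. `python count_words.py texts.jsonl` (both file names here are hypothetical). Because each zone's `Vocab` and `StringStore` entries are evicted when the block exits, memory usage should stay roughly flat across batches.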
Numpy v2 compatibility
Numpy 2.0 isn't binary-compatible with numpy v1, so we need to build against one or the other. This release isolates the dependency change and has no other changes, to make things easier if the dependency change causes problems.
This dependency change was previously attempted in version 3.7.6, but dependencies within the v3.7 family of models resulted in some conflicts, and some packages depending on numpy v1 were incompatible with v3.7.6. I've therefore removed the 3.7.6 release and replaced it with this one, which increments the minor version.
Model packages no longer list spacy as a requirement
I've also made a change to the way models are packaged, to make it easier to release more quickly. Previously, spaCy models specified a versioned requirement on `spacy` itself. This meant that there was no way to increment the spaCy version and have it work with the existing models, because the models would specify they were only compatible with `spacy>=3.7.0,<3.8.0`. We have a compatibility table that allows spaCy to see which models are compatible, but the models themselves can't know which future versions of spaCy they work with.
I've therefore added a flag `--require-parent/--no-require-parent` to the `spacy package` CLI, which controls whether the parent package (e.g. spaCy) should be listed as a requirement of the model. `--require-parent` is the default for v3.8, but this will change to `--no-require-parent` by default in v4. I've set `--no-require-parent` for the v3.8 models, so that further changes can be published that don't impact the models, without retraining them or forcing users to redownload them.
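For example, you could build a model package without the versioned `spacy` requirement with something like `python -m spacy package ./my_model ./packages --no-require-parent` (the input and output paths here are placeholders).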