v0.2.0: Numba support, new `Tokenizer` class, more stopwords


Version 0.2.0 is an exciting release! It brings a lot of new features, including numba support (over 2x faster in many cases), stopwords for 10 new languages (thank you @bm777), a new Tokenizer class (faster and more flexible), document weighting at retrieval time, a new JSON backend (orjson), improvements to the utilities for working with BEIR, and many new examples. Hope you enjoy this new release!

Numba JIT support

See discussion here: #46

The most important new feature of v0.2.0 is the addition of numba support, which only requires you to install the core dependencies (with pip install "bm25s[core]") or to install numba directly (with pip install numba).

Using numba results in a substantial speedup, so it is highly recommended if numba is available on your system (which should be the case in most environments). You can find a benchmark here.

Notably, by combining numba JIT-based scoring, numba-based top-k selection (which no longer relies on jax; see the discussion thread), and the new, faster bm25s.tokenization.Tokenizer (see below), we observe the following speedups on a few benchmarks, in a single-threaded setting on Kaggle CPUs:

  • MSMarco: 12.2 --> 39.18
  • HotpotQA: 20.88 --> 47.16
  • Fever: 20.19 --> 53.84
  • NQ: 41.85 --> 109.47
  • Quora: 272.04 --> 479.71
  • NFCorpus: 1196.16 --> 5696.21

To enable it, simply do:

import bm25s

# Load and tokenize your corpus (a toy corpus is shown here)
corpus = [
    "a cat is a feline and likes to purr",
    "a dog is a canine and likes to bark",
]
corpus_tokens = bm25s.tokenize(corpus)

# Select the numba backend when creating the retriever
retriever = bm25s.BM25(backend="numba")

# Index and run retrieval as usual
retriever.index(corpus_tokens)
results, scores = retriever.retrieve(bm25s.tokenize("does the fish purr like a cat?"), k=1)

This is all you need to use numba JIT when calling the retriever.retrieve method. Note, however, that the first call may be slower while the JIT compiles, so you can warm up the retriever by passing a small query first, as sketched below. You can find more examples in the repository.
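
Continuing the snippet above, here is a minimal sketch of the warmup idea (the query strings are arbitrary placeholders):

# Trigger JIT compilation once with a throwaway query so that
# subsequent calls to retrieve() run at full speed.
warmup_tokens = bm25s.tokenize("warmup query")
retriever.retrieve(warmup_tokens, k=1)

# Later calls now run at the JIT-compiled speed.
results, scores = retriever.retrieve(bm25s.tokenize("real query"), k=1)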

New bm25s.tokenization.Tokenizer class

With v0.2.0, we are adding the Tokenizer class, which enhances the existing features of bm25s.tokenize and makes them more flexible. Notably, it enables a generator mode (streaming with yield) and is much faster when tokenizing queries against an existing vocabulary. You can also specify your own splitter function, so tokenization is no longer locked to a regex pattern.

You can find more information in the repository's README and examples.
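
As a rough sketch of how the new class is meant to be used (the constructor arguments below — a stemmer, a stopword list, and a custom splitter callable — mirror the features described above, but the exact parameter names and defaults should be confirmed against the examples):

import Stemmer
from bm25s.tokenization import Tokenizer

corpus = [
    "a cat is a feline and likes to purr",
    "a dog is a canine and likes to bark",
]

# Build a reusable tokenizer: the splitter can be any callable that turns a
# string into a list of tokens, replacing the default regex pattern.
tokenizer = Tokenizer(
    stemmer=Stemmer.Stemmer("english"),
    stopwords="en",
    splitter=lambda text: text.split(),
)

# Tokenize the corpus once; the tokenizer keeps the vocabulary it builds, so
# tokenizing queries afterwards reuses that vocabulary and is much faster.
corpus_tokens = tokenizer.tokenize(corpus)
query_tokens = tokenizer.tokenize(["does the fish purr like a cat?"])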

New stopwords

Stopwords for 10 new languages (from NLTK) were added by @bm777 in #33. The supported languages are now as follows (a usage sketch follows the list):

  • English
  • German
  • Dutch
  • French
  • Spanish
  • Portuguese
  • Italian
  • Russian
  • Swedish
  • Norwegian
  • Chinese
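
As a quick illustration of selecting one of these stopword lists at tokenization time (note: the exact language identifier accepted by the stopwords argument — e.g. "german" vs. "de" — is an assumption here; check the bm25s documentation, or pass your own list of stopword strings):

import bm25s

corpus = [
    "Die Katze schläft auf dem Sofa",
    "Der Hund spielt im Garten",
]

# Select a built-in stopword list by language identifier (assumed spelling),
# or pass an explicit list of stopword strings instead.
corpus_tokens = bm25s.tokenize(corpus, stopwords="german")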

New JSON backend

orjson is now supported as a JSON backend, as it is faster than ujson and actively maintained.

Weight mask

BM25.retrieve now supports a weight_mask array, which applies a weight (binary or float) to each document at retrieval time. This is useful, for example, if you want to use a binary mask to hide documents deemed irrelevant.
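
A minimal sketch, assuming weight_mask is a 1-D array with one entry per document in the corpus (the shape is inferred from the description above):

import numpy as np
import bm25s

corpus = [
    "a cat is a feline and likes to purr",
    "a dog is a canine and likes to bark",
    "a bird is an avian and likes to chirp",
]
retriever = bm25s.BM25()
retriever.index(bm25s.tokenize(corpus))

# Binary mask over the corpus: zeroing an entry hides that document
# from the retrieved results.
weight_mask = np.array([1.0, 0.0, 1.0], dtype=np.float32)

query_tokens = bm25s.tokenize("does the fish purr like a cat?")
results, scores = retriever.retrieve(query_tokens, k=2, weight_mask=weight_mask)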

Dependency Notes

  • orjson replaces ujson as a core dependency
  • jax[cpu] is no longer a core dependency, but part of the selection dependencies. Be careful not to use backend_selection='jax' if you don't have it installed!
  • numba is a new core dependency, allowing you to use backend='numba' directly when initializing a retriever.
  • pytrec_eval is a new evaluation dependency, which is useful if you want to use the evaluation functions in bm25s.utils.beir (copied from the BEIR library).

Advanced Numba

Alternative Usage (advanced)

Here's an example of how to leverage the numba speedups using the alternative method of activating the numba scorer and choosing the backend_selection manually. This method is not recommended unless you specifically want more control over how the backend is activated.

import Stemmer

import bm25s.hf

def main(repo_name="xhluca/bm25s-fiqa-index"):
    queries = [
        "Is chemotherapy effective for treating cancer?",
        "Is Cardiac injury is common in critical cases of COVID-19?",
    ]

    # Load a prebuilt index from the Hugging Face Hub (the corpus itself is not loaded)
    retriever = bm25s.hf.BM25HF.load_from_hub(
        repo_name, load_corpus=False, mmap=False
    )

    # Tokenize the queries
    stemmer = Stemmer.Stemmer("english")
    queries_tokenized = bm25s.tokenize(queries, stemmer=stemmer)

    # Retrieve the top-k results: activate the numba scorer, then explicitly
    # select the numba backend at retrieval time
    retriever.activate_numba_scorer()
    results = retriever.retrieve(queries_tokenized, k=3, backend_selection="numba")
    # show first results
    result = results.documents[0]
    print(f"First score (# 1 result):{results.scores[0, 0]}")
    print(f"First result (# 1 result):\n{result[0]}")

if __name__ == "__main__":
    main()

Again, this method is only recommended if you need finer control over how the backend is activated.

WARNING: it does not work well with multithreading. For the full example, see retrieve_with_numba_advanced.py
