v1.7.0

Meilisearch v1.7.0 focuses on improving features introduced in v1.6.0, indexing speed, and hybrid search.

🧰 All official Meilisearch integrations (including SDKs, clients, and other tools) are compatible with this Meilisearch release. Integration deployment happens between 4 and 48 hours after a new version becomes available.

Some SDKs might not include all new features; consult the project repository for detailed information. Is a feature you need missing from your chosen SDK? Create an issue letting us know you need it, or, for open-source karma points, open a PR implementing it (we'll love you for that ❤️).

New features and improvements 🔥

Improved AI-powered search โ€” Experimental

To activate AI-powered search, set vectorStore to true in the /experimental-features route. Consult the Meilisearch documentation for more information.
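
If you have not activated it yet, a minimal activation call, assuming a local instance on the default port, looks like this:

curl \
  -X PATCH 'http://localhost:7700/experimental-features/' \
  -H 'Content-Type: application/json' \
  --data-binary '{ "vectorStore": true }'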

🗣️ This is an experimental feature and we need your help to improve it! Share your thoughts and feedback on this GitHub discussion.

New OpenAI embedding models

When configuring OpenAI embedders, you can now specify two new models:

  • text-embedding-3-small with a default dimension of 1536.
  • text-embedding-3-large with a default dimension of 3072.

These new models are cheaper and improve search result relevancy.

Custom OpenAI model dimensions

You can configure dimensions for embedders using the new OpenAI models: text-embedding-3-small and text-embedding-3-large. Dimensions must be greater than 0 and no larger than the model's maximum size:

"embedders": {
  "new_model": {
    "source": "openAi",
    "model": "text-embedding-3-large",
    "dimensions": 512 // must be >0, must be <= 3072 for "text-embedding-3-large"
  },
  "legacy_model": {
    "source": "openAi",
    "model": "text-embedding-ada-002"
  }
}

You cannot customize dimensions for older OpenAI models such as text-embedding-ada-002. Setting dimensions to any value except the default size of these models will result in an error.
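
As a sketch of how this configuration is applied, assuming an index named movies and the vector store feature already activated, the embedders object above can be sent through the index settings route (an OpenAI API key would normally also be supplied, for instance through an apiKey field, omitted here):

curl \
  -X PATCH 'http://localhost:7700/indexes/movies/settings' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "embedders": {
      "new_model": {
        "source": "openAi",
        "model": "text-embedding-3-large",
        "dimensions": 512
      }
    }
  }'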

Done in #4375 by @Gosti.

GPU support when computing Hugging Face embeddings

Activate CUDA to use Nvidia GPUs when computing Hugging Face embeddings. This can significantly improve embedding generation speeds.

To enable GPU support through CUDA for Hugging Face embedding generation:

  1. Install CUDA dependencies
  2. Clone and compile Meilisearch with the cuda feature: cargo build --release --package meilisearch --features cuda
  3. Launch your freshly compiled Meilisearch binary
  4. Activate vector search
  5. Add a Hugging Face embedder

Done by @dureuill in #4304.
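
For step 5 above, a minimal Hugging Face embedder definition might look like the sketch below; the embedder name and model are only illustrative, and omitting the model should fall back to the default:

"embedders": {
  "default": {
    "source": "huggingFace",
    "model": "BAAI/bge-base-en-v1.5" // illustrative model name; this field is optional
  }
}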

Improved indexing speed and reduced memory crashes

Stabilized showRankingScoreDetails

The showRankingScoreDetails search parameter, first introduced as an experimental feature in Meilisearch v1.3.0, is now a stable feature.

Use it with the /search endpoint to view detailed scores per ranking rule for each returned document:

curl \
  -X POST 'http://localhost:7700/indexes/movies/search' \
  -H 'Content-Type: application/json' \
  --data-binary '{ "q": "Batman Returns", "showRankingScoreDetails": true }'

When showRankingScoreDetails is set to true, returned documents include a _rankingScoreDetails field:

"_rankingScoreDetails": {
  "words": {
    "order": 0,
    "matchingWords": 1,
    "maxMatchingWords": 1,
    "score": 1.0
  },
  "typo": {
    "order": 1,
    "typoCount": 0,
    "maxTypoCount": 1,
    "score": 1.0
  },
  "proximity": {
    "order": 2,
    "score": 1.0
  },
  "attribute": {
    "order": 3,
    "attributes_ranking_order": 0.8,
    "attributes_query_word_order": 0.6363636363636364,
    "score": 0.7272727272727273
  },
  "exactness": {
    "order": 4,
    "matchType": "noExactMatch",
    "matchingWords": 0,
    "maxMatchingWords": 1,
    "score": 0.3333333333333333
  }
}

Done by @dureuill in #4389.

Improved logging

Done by @irevoire in #4391

Log output modified

Log messages now follow a different pattern:

# new format ✅
2024-02-06T14:54:11Z INFO actix_server::builder: 200: starting 10 workers
# old format ❌
[2024-02-06T14:54:11Z INFO  actix_server::builder] starting 10 workers

โš ๏ธ This change may impact you if you have any automated tasks based on log output.

Log output format โ€” Experimental

You can now configure Meilisearch to output logs in JSON.

Relaunch your instance passing json to the --experimental-logs-mode command-line option:

./meilisearch --experimental-logs-mode json

--experimental-logs-mode accepts two values:

  • human: default human-readable output
  • json: JSON structured logs
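
If you prefer environment variables to command-line flags, the usual MEILI_-prefixed equivalent should work as well; this is an assumption based on how Meilisearch exposes its other options, so verify it against the configuration documentation:

# assumed environment-variable equivalent of --experimental-logs-mode json
MEILI_EXPERIMENTAL_LOGS_MODE=json ./meilisearch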

🗣️ This feature is experimental and we need your help to improve it! Share your thoughts and feedback on this GitHub discussion.

⚠️ Experimental features may be incompatible between Meilisearch versions.

New /logs/stream and /logs/stderr routes โ€” Experimental

Meilisearch v1.7 introduces two new experimental API routes: /logs/stream and /logs/stderr.

Use the /experimental-features route to activate both routes during runtime:

curl \
  -X PATCH 'http://localhost:7700/experimental-features/' \
  -H 'Content-Type: application/json'  \
  --data-binary '{
    "logsRoute": true
  }'
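
You can check which experimental features are currently enabled by sending a GET request to the same route:

curl -X GET 'http://localhost:7700/experimental-features/'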

🗣️ This feature is experimental, and we need your help to improve it! Share your thoughts and feedback on this GitHub discussion.

⚠️ Experimental features may be incompatible between Meilisearch versions.

/logs/stream

Use the POST endpoint to output logs in a stream. The following example disables actix logging and keeps all other logs at the DEBUG level:

curl \
  -X POST http://localhost:7700/logs/stream \
  -H 'Content-Type: application/json' \
  --data-binary '{
      "mode": "human",
      "target": "actix=off,debug"
    }'

This endpoint requires two parameters:

  • target: defines the log level and which part of the engine it applies to. Must be a string formatted as code_part=log_level. Omit code_part= to set a single log level for the whole stream. Valid values for the log level are: trace, debug, info, warn, error, or off
  • mode: accepts fmt (basic) or profile (verbose trace)

Use the DELETE endpoint of /logs/stream to interrupt a stream:

curl -X DELETE http://localhost:7700/logs/stream

You may only have one listener at a time. Meilisearch log streams are not compatible with xh or httpie.
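
Because the response is a long-lived stream, plain curl can capture it; a small sketch that disables output buffering and writes the stream to a file (the target value here is just an example):

curl -N \
  -X POST http://localhost:7700/logs/stream \
  -H 'Content-Type: application/json' \
  --data-binary '{ "mode": "human", "target": "info" }' > meilisearch-logs.txt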

/logs/stderr

Use the POST endpoint to configure the default log output for non-stream logs:

curl \
  -X POST http://localhost:7700/logs/stderr \
  -H 'Content-Type: application/json' \
  --data-binary '{
      "target": "debug"
    }'

/logs/stderr accepts one parameter:

  • target: defines the log level and which part of the engine it applies to. Must be a string formatted as code_part=log_level. Omit code_part= to set a single log level for the whole stream. Valid values for the log level are: trace, debug, info, warn, error, or off

Other improvements

Fixes 🐞

Misc

โค๏ธ Thanks again to our external contributors:
