This is a small hotfix for the 0.12.0 release.
- fix: Fix the slice call in the content truncation logic that resulted in excessive context-token usage. Fixes #94 by @MohamedBassem in #4629dac
The bug fixed in this release mistakenly dropped the first 1500 words of large content during tag inference, instead of keeping only the first 1500 words (see the sketch after the list below). This had two side effects for bookmarks with a lot of content (>1500 words):
- Because the first 1500 words were dropped, you may have gotten suboptimal tags for content longer than 1500 words.
- For bookmarks with very large content, inference may have failed by exceeding the context limits of the models you're using. For bookmarks that still fit within the context limit, inference may have taken longer and consumed more credits than intended.
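
For context, here is a minimal sketch of the kind of slice mistake described above. The function and variable names are hypothetical and illustrative, not the project's actual code:

```ts
// Hypothetical sketch of the bug class fixed in this release;
// names are illustrative, not taken from the codebase.
function truncateContent(content: string, maxWords = 1500): string {
  const words = content.split(/\s+/);

  // Buggy: slice(maxWords) drops the first 1500 words and keeps the rest,
  // so huge documents stayed huge and lost their beginning.
  // const kept = words.slice(maxWords);

  // Fixed: slice(0, maxWords) keeps only the first 1500 words.
  const kept = words.slice(0, maxWords);

  return kept.join(" ");
}
```

With a single argument, `Array.prototype.slice(start)` returns everything from `start` onward, which is why the buggy call kept the tail of the document instead of the head.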
If this is a problem for you, you can trigger a re-index for the affected bookmarks.