github tastyware/streaq v4.0.0


What's Changed

  • begin adding web UI by @Graeme22 in #30
    New major release! Plenty of breaking changes, but most should be welcome.
    • Simple web UI added for monitoring tasks. Try it with streaq test.worker --web! The UI is written with FastAPI, Bootstrap and HTMX and implemented in a way that allows easily mounting it onto a separate FastAPI app. (You'll need to install with the web extra: pip install -U streaq[web])

      (screenshot of the new web UI)

    • Task priorities have been completely rewritten. The previous enum of three hard-coded priorities, TaskPriority, has been replaced with a Worker parameter, priorities, which lets you define an arbitrary number of custom priorities; it defaults to a single priority for those who don't need this functionality. Additionally, delayed tasks can now be given a priority, which wasn't possible before.

      # this list should be ordered from lowest to highest
      worker = Worker(priorities=["low", "high"])
      
      async with worker:
          await sleeper.enqueue(3).start(priority="low")
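Since the priorities list is ordered lowest to highest, a worker drains higher-priority tasks first. A toy stdlib sketch of that ordering with heapq (illustrative only, not streaq's implementation; `urgent_task` is a made-up name):

```python
import heapq

PRIORITIES = ["low", "high"]  # ordered lowest to highest, as above
# map each priority to a rank so that higher priorities sort first
RANK = {name: -i for i, name in enumerate(PRIORITIES)}

heap = []
heapq.heappush(heap, (RANK["low"], "sleeper(3)"))
heapq.heappush(heap, (RANK["high"], "urgent_task()"))

# the highest-priority task comes off the heap first
print(heapq.heappop(heap)[1])  # urgent_task()
```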
    • Use of pickle.loads, the default deserializer, is a security risk: an attacker who gains access to the Redis database could run arbitrary code. You can now protect against this attack vector by passing a signing_secret to the worker. The signing key ensures that data which has been tampered with or corrupted in Redis will not be unpickled.

      worker = Worker(signing_secret="MY-SECRET-KEY")
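The protection follows the standard sign-then-verify pattern: attach an HMAC of the serialized payload, and refuse to deserialize anything whose signature doesn't match. A minimal stdlib sketch of the idea (not streaq's actual implementation; function names here are illustrative):

```python
import hashlib
import hmac
import pickle

SECRET = b"MY-SECRET-KEY"

def sign_dumps(obj):
    """Serialize, then prepend an HMAC so tampering is detectable."""
    payload = pickle.dumps(obj)
    mac = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return mac + payload

def safe_loads(data):
    """Verify the HMAC before unpickling; reject modified data."""
    mac, payload = data[:32], data[32:]  # sha256 digest is 32 bytes
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("signature mismatch: refusing to unpickle")
    return pickle.loads(payload)

blob = sign_dumps({"task": "sleeper", "args": (3,)})
assert safe_loads(blob) == {"task": "sleeper", "args": (3,)}
```

Without the secret, an attacker who writes to Redis can't produce a valid signature, so the dangerous pickle.loads call is never reached.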
    • Reliability and pessimistic execution have been improved across the entire codebase. A new parameter, Worker.idle_timeout, allows detecting stale tasks that were prefetched (and sometimes even began execution). Tasks are more easily reclaimed when workers are shut down incorrectly, several edge cases have been eliminated, and prefetched tasks are now returned to the queue immediately on worker shutdown.
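The idea behind an idle timeout can be sketched generically: each claimed task records a timestamp, and anything that hasn't made progress within the timeout is treated as stale and eligible for reclaiming. A stdlib sketch under those assumptions (not streaq's internals; the constant merely mirrors the spirit of Worker.idle_timeout):

```python
import time

IDLE_TIMEOUT = 30.0  # seconds; analogous in spirit to Worker.idle_timeout

def find_stale(claimed, now=None):
    """Return ids of tasks whose claim has exceeded the idle timeout."""
    now = time.monotonic() if now is None else now
    return [tid for tid, claimed_at in claimed.items()
            if now - claimed_at > IDLE_TIMEOUT]

claims = {"task-a": 0.0, "task-b": 95.0}
# at t=100, task-a has been idle for 100s (> 30s) and should be reclaimed
print(find_stale(claims, now=100.0))  # ['task-a']
```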

    • New function added, Worker.enqueue_many(), which allows for batching together enqueue calls into a single Redis pipeline for greater efficiency.

      async with worker:
          # importantly, we're not using `await` here
          tasks = [sleeper.enqueue(i) for i in range(10)]
          await worker.enqueue_many(tasks)
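The efficiency gain comes from collapsing N network round trips into one: commands are buffered locally and flushed in a single batch. A toy sketch of that idea using a mock pipeline that counts flushes (not the actual redis client or streaq API):

```python
class MockPipeline:
    """Buffers commands locally and sends them in a single round trip."""

    def __init__(self):
        self.buffer = []
        self.round_trips = 0

    def enqueue(self, task):
        self.buffer.append(task)  # buffered; no network traffic yet

    def execute(self):
        self.round_trips += 1     # one flush for the whole batch
        sent, self.buffer = self.buffer, []
        return sent

pipe = MockPipeline()
for i in range(10):
    pipe.enqueue(f"sleeper({i})")
sent = pipe.execute()
print(len(sent), pipe.round_trips)  # 10 tasks sent, 1 round trip
```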
    • Many Lua scripts have been made more efficient, and several scripts have been combined to further boost efficiency.

    • TaskData has been renamed to TaskInfo and now includes additional properties dependents and dependencies.

    • TaskResult now includes additional properties fn_name and enqueue_time, and no longer includes queue_name which was redundant.

    • Task.then() typing improved

    • Worker.queue_fetch_limit renamed to Worker.prefetch. Prefetching can be disabled entirely by setting this to 0.

    • Many exceptions now contain additional stack trace info

    • Tests now use separate queue names for better isolation

    • Worker.with_scheduler has been eliminated, as listen_queue and listen_stream have been combined into a single, more efficient loop, meaning all workers can schedule tasks. Polling is now eliminated entirely from the library.
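Merging two listeners into one loop is a standard asyncio pattern: race both sources with asyncio.wait and handle whichever completes first, rather than polling each on a timer. A generic sketch of that pattern (not streaq's actual loop; queue names are illustrative):

```python
import asyncio

async def combined_loop(queue_a, queue_b, handled, n):
    """Consume two queues in a single loop, without polling."""
    get_a = asyncio.ensure_future(queue_a.get())
    get_b = asyncio.ensure_future(queue_b.get())
    while len(handled) < n:
        # wake as soon as either source yields an item
        done, _ = await asyncio.wait(
            {get_a, get_b}, return_when=asyncio.FIRST_COMPLETED
        )
        if get_a in done:
            handled.append(("a", get_a.result()))
            get_a = asyncio.ensure_future(queue_a.get())
        if get_b in done:
            handled.append(("b", get_b.result()))
            get_b = asyncio.ensure_future(queue_b.get())
    get_a.cancel()
    get_b.cancel()

async def main():
    qa, qb = asyncio.Queue(), asyncio.Queue()
    handled = []
    qa.put_nowait("scheduled-task")
    qb.put_nowait("streamed-task")
    await combined_loop(qa, qb, handled, n=2)
    return handled

print(asyncio.run(main()))
```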

    • Several internal worker functions have been combined into more efficient versions

    • Worker shutdown no longer kills tasks prematurely in some circumstances

    • Task abortion now works immediately for delayed tasks that are still enqueued

Full Changelog: v3.0.0...v4.0.0
