- BREAKING Added the `cleanup()` scheduler method and a configuration option (`cleanup_interval`). A corresponding abstract method was added to the `DataStore` class. This method purges expired job results and schedules that have exhausted their triggers and have no more associated jobs running. Previously, schedules were automatically deleted instantly once their triggers could no longer produce any fire times. (See the configuration sketch after this list.)
- BREAKING Made publishing `JobReleased` events the responsibility of the `DataStore` implementation, rather than the scheduler, for consistency with the `acquire_jobs()` method
- BREAKING The `started_at` field was moved from `Job` to `JobResult`
- BREAKING Removed the `from_url()` class methods of `SQLAlchemyDataStore`, `MongoDBDataStore` and `RedisEventBroker` in favor of the ability to pass a connection URL to the initializer (see the migration sketch after this list)
- Added the ability to pause and unpause schedules (PR by @WillDaSilva)
- Added the `scheduled_start` field to the `JobAcquired` event
- Added the `scheduled_start` and `started_at` fields to the `JobReleased` event
- Fixed large parts of `MongoDBDataStore` still calling blocking functions in the event loop thread
- Fixed JSON serialization of triggers that had been used at least once
- Fixed dialect name checks in the SQLAlchemy job store
- Fixed JSON and CBOR serializers unable to serialize enums
- Fixed infinite loop in `CalendarIntervalTrigger` with UTC timezone (PR by unights)
- Fixed scheduler not resuming job processing when `max_concurrent_jobs` had been reached and then a job was completed, thus making job processing possible again (PR by MohammadAmin Vahedinia)
- Fixed the shutdown procedure of the Redis event broker
- Fixed `SQLAlchemyDataStore` not respecting custom schema name when creating enums
- Fixed skipped intervals with overlapping schedules in `AndTrigger` (#911; PR by Bennett Meares)
- Fixed implicitly created client instances in data stores and event brokers not being closed along with the store/broker
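
As a minimal sketch of the new cleanup behaviour described in the first entry above: the changelog names the `cleanup()` method and the `cleanup_interval` option but not the exact constructor signature, so the keyword placement on the scheduler and the `timedelta` value type are assumptions here.

```python
from datetime import timedelta

from apscheduler import AsyncScheduler


async def main() -> None:
    # Assumption: cleanup_interval is accepted by the scheduler's initializer and
    # controls how often expired job results and exhausted schedules are purged.
    async with AsyncScheduler(cleanup_interval=timedelta(minutes=15)) as scheduler:
        # The new cleanup() scheduler method can also be called explicitly,
        # e.g. from a maintenance task, to purge stale data immediately.
        await scheduler.cleanup()
        await scheduler.run_until_stopped()


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
```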
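
For the removal of the `from_url()` class methods, a hedged migration sketch: the module path and the placeholder connection URL below are assumptions for illustration; the changelog only states that the URL can now be passed directly to the initializer.

```python
from apscheduler.datastores.sqlalchemy import SQLAlchemyDataStore

# Before (removed): SQLAlchemyDataStore.from_url("postgresql+asyncpg://...")
# After: pass the connection URL straight to the initializer.
# The URL below is a placeholder; adjust the driver and credentials to your setup.
data_store = SQLAlchemyDataStore("postgresql+asyncpg://user:pass@localhost/mydb")
```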