0.10.0 Edge of Glory
- A native scheduler with support for exactly-once, fault-tolerant, timezone-aware scheduling.
A new Dagster daemon process has been added to manage your schedules and sensors with a
reconciliation loop, ensuring that all runs are executed exactly once, even if the Dagster daemon
experiences occasional failure. See the Migration Guide for instructions on moving from the
K8sScheduler to the new scheduler.
- First-class sensors, built on the new Dagster daemon, allow you to instigate runs based on
changes in external state - for example, files on S3 or assets materialized by other Dagster
pipelines. See the Sensors Overview
for more information.
- Dagster now supports pipeline run queueing. You can apply instance-level run concurrency
limits and prioritization rules by adding the QueuedRunCoordinator to your Dagster instance. See
the Run Concurrency Overview
for more information.
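For illustration, the QueuedRunCoordinator is enabled in your instance's dagster.yaml. The keys below follow the 0.10.0 docs, but treat the specific limits as a sketch and verify against your version:

```yaml
# dagster.yaml (instance configuration) -- illustrative values
run_coordinator:
  module: dagster.core.run_coordinator
  class: QueuedRunCoordinator
  config:
    max_concurrent_runs: 10        # instance-wide cap on concurrent runs
    tag_concurrency_limits:        # per-tag prioritization/limit rules
      - key: "database"
        value: "redshift"
        limit: 2
```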
- The IOManager abstraction provides a new, streamlined primitive for granular control over where
and how solid outputs are stored and loaded. This is intended to replace the (deprecated)
intermediate/system storage abstractions. See the IO Manager Overview
for more information.
- A new Partitions page in Dagit lets you view your pipeline runs organized by partition.
You can also launch backfills from Dagit and monitor them from this page.
- A new Instance Status page in Dagit lets you monitor the health of your Dagster instance,
with repository location information, daemon statuses, instance-level schedule and sensor
information, and linkable instance configuration.
- Resources can now declare their dependencies on other resources via the required_resource_keys parameter on @resource.
- Our support for deploying on Kubernetes is now mature and battle-tested. Our Helm chart is
now easier to configure and deploy, and we’ve made big investments in observability and
reliability. You can view Kubernetes interactions in the structured event log and use Dagit to
help you understand what’s happening in your deployment. The defaults in the Helm chart will
give you graceful degradation and failure recovery right out of the box.
- Experimental support for dynamic orchestration with the new DynamicOutputDefinition API.
Dagster can now map the downstream dependencies over a dynamic output at runtime.
Dropping Python 2 support
- We’ve dropped support for Python 2.7, based on community usage and enthusiasm for Python 3-native public APIs.
Removal of deprecated APIs
These APIs were marked for deprecation with warnings in the 0.9.0 release, and have been removed in
the 0.10.0 release.
- The decorator input_hydration_config has been removed. Use the dagster_type_loader decorator instead.
- The decorator output_materialization_config has been removed. Use dagster_type_materializer instead.
- The system storage subsystem has been removed. This includes SystemStorageDefinition,
@system_storage, and default_system_storage_defs. Use the new IOManagers API instead. See
the IO Manager Overview for more information.
- The config_field argument on decorators and definitions classes has been removed and replaced
with config_schema. This is a drop-in rename.
- The argument step_keys_to_execute to the functions reexecute_pipeline and
reexecute_pipeline_iterator has been removed. Use the step_selection argument to select
subsets for execution instead.
- Repositories can no longer be loaded using the legacy
repository key in your workspace.yaml; use
load_from instead. See the
Workspaces Overview for
documentation about how to define a workspace.
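For illustration, a workspace.yaml using the load_from key might look like the following (the file and module names here are hypothetical; python_file and python_module are the documented target types):

```yaml
# workspace.yaml -- illustrative targets
load_from:
  - python_file: repos.py          # load repositories from a Python file
  - python_module: my_dagster_code # or from an importable module
```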
Breaking API Changes
- SolidExecutionResult.compute_output_event_dict has been renamed to
SolidExecutionResult.compute_output_events_dict. A solid execution result is returned from
methods such as result_for_solid. Any call sites will need to be updated.
- The .compute suffix is no longer applied to step keys. Step keys that were previously named
my_solid.compute will now be named my_solid. If you are using any API method that takes a
step_selection argument, you will need to update the step keys accordingly.
- The pipeline_def property has been removed from the InitResourceContext passed to functions
decorated with @resource.
- The schema for the scheduler values in the Helm chart has changed. Instead of a simple toggle
on/off, we now require an explicit scheduler.type to specify usage of the
K8sScheduler, or otherwise. If your specified scheduler.type has
required config, these fields must be specified under scheduler.config.
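For example, a values.yaml fragment selecting a scheduler type might look like this (the DagsterDaemonScheduler type name is assumed from the chart's documented options; verify against your chart version):

```yaml
# Helm values.yaml fragment -- illustrative
scheduler:
  type: DagsterDaemonScheduler   # or K8sScheduler; required config goes under scheduler.config
```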
- All snake_case fields have been changed to camelCase. Please update your values.yaml
accordingly.
- The Helm values celery and k8sRunLauncher have now been consolidated under the Helm value
runLauncher for simplicity. Use the field runLauncher.type to specify usage of the
CeleryK8sRunLauncher, or otherwise. By default, the K8sRunLauncher is enabled.
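As a sketch, the consolidated run launcher values look like the following (type names per the 0.10.0 chart; verify against your chart version):

```yaml
# Helm values.yaml fragment -- illustrative
runLauncher:
  type: CeleryK8sRunLauncher   # or K8sRunLauncher (the default)
```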
- All Celery message brokers (i.e. RabbitMQ and Redis) are disabled by default. If you are using
CeleryK8sRunLauncher, you should explicitly enable your message broker of choice.
- userDeployments are now enabled by default.
- Event log messages streamed to stderr have been streamlined to be a single line.
- Experimental support for memoization and versioning lets you execute pipelines incrementally,
selecting which solids need to be rerun based on runtime criteria and versioning their outputs
with configurable identifiers that capture their upstream dependencies.
To set up memoized step selection, users can provide a MemoizableIOManager, whose has_output
function decides whether a given solid output needs to be computed or already exists. To execute
a pipeline with memoized step selection, users can supply the dagster/is_memoized_run run tag.
To set the version on a solid or resource, users can supply the version field on the definition.
To access the derived version for a step output, users can access the version field on the
OutputContext passed to the handle_output and load_input methods of IOManager and the
has_output method of MemoizableIOManager.
- Schedules that are executed using the new DagsterDaemonScheduler can now execute in any
timezone by adding an execution_timezone parameter to the schedule. Daylight Savings Time
transitions are also supported. See the Schedules Overview for
more information and examples.
Dagit
- Countdown and refresh buttons have been added for pages with regular polling queries (e.g. Runs, Assets).
- Confirmation and progress dialogs are now presented when performing run terminations and
deletions. Additionally, hanging/orphaned runs can now be forced to terminate, by selecting
"Force termination immediately" in the run termination dialog.
- The Runs page now shows counts for "Queued" and "In progress" tabs, and individual run pages
show timing, tags, and configuration metadata.
- The backfill experience has been improved with means to view progress and terminate the entire
backfill via the partition set page. Additionally, errors related to backfills are now surfaced in Dagit.
- Shortcut hints are no longer displayed when attempting to use the screen capture command.
- The asset page has been revamped to include a table of events and enable organizing events by
partition. Asset key escaping issues in other views have been fixed as well.
- Miscellaneous bug fixes, frontend performance tweaks, and other improvements are also included.
Kubernetes/Helm
- The Dagster Kubernetes documentation has been refreshed.
We've added schema validation to our Helm chart. You can now check that your values YAML file is
correct by running:
helm lint helm/dagster -f helm/dagster/values.yaml
- Added support for resource annotations throughout our Helm chart.
- Added Helm deployment of the dagster daemon & daemon scheduler.
- Added Helm support for configuring a compute log manager in your dagster instance.
- User code deployments now include a user ConfigMap by default.
- Changed the default liveness probe for Dagit to use httpGet "/dagit_info" instead of tcpSocket:80.
- Added support for user code deployments on Kubernetes.
- Added support for tagging pipeline executions.
- Fixes to support version 12.0.0 of the Python Kubernetes client.
- Improved implementation of Kubernetes+Dagster retries.
- Many logging improvements to surface debugging information and failures in the structured event log.
- Improved interrupt/termination handling in Celery workers.
Integrations & Libraries
- Added a new dagster-docker library with a DockerRunLauncher that launches each run in its own
Docker container. (See the Deploying with Docker docs for an example.)
- Added support for AWS Athena. (Thanks @jmsanders!)
- Added mocks for AWS S3, Athena, and Cloudwatch in tests. (Thanks @jmsanders!)
- Allow setting of S3 endpoint through env variables. (Thanks @marksteve!)
- Various bug fixes and new features for the Azure, Databricks, and Dask integrations.
- Added the create_databricks_job_solid for creating solids that launch Databricks jobs.