## Added
- **SSH Shell Configuration**: Added a `shell` field to the SSH executor configuration for wrapping remote commands in a shell. This enables shell features like variable expansion, pipes, and command chaining on remote servers. Supports both DAG-level and step-level configuration, with the step-level `shell` field as a fallback for convenience.

  ```yaml
  ssh:
    user: deploy
    host: app.example.com
    shell: /bin/bash  # Commands wrapped as: /bin/bash -c 'command'
    # Or array syntax:
    # shell: ["/bin/bash", "-e"]

  steps:
    - command: echo $HOME && ls -la  # Shell features now work
  ```

  See SSH for full documentation.
- **Simplified Executor Syntax**: Added `type` and `config` fields at the step level as a cleaner alternative to the `executor` block. Both syntaxes are fully supported. (#1525)

  ```yaml
  # New shorthand syntax
  steps:
    - name: deploy
      type: ssh
      config:
        host: prod.example.com
        user: deploy
      command: ./deploy.sh

  # Equivalent verbose syntax (still works)
  steps:
    - name: deploy
      executor:
        type: ssh
        config:
          host: prod.example.com
          user: deploy
      command: ./deploy.sh
  ```

  Note: `type`/`config` cannot be mixed with the `executor` field in the same step.
- **Chat Step Type**: Added a new step type for integrating Large Language Models into workflows. Execute LLM requests to OpenAI, Anthropic, Google Gemini, OpenRouter, and local models (Ollama, vLLM). (#1548)

  ```yaml
  steps:
    - type: chat
      llm:
        provider: openai
        model: gpt-4o
      messages:
        - role: user
          content: "What is 2+2?"
      output: ANSWER
  ```

  Key features:

  - Multi-provider support: OpenAI, Anthropic, Gemini, OpenRouter, and local OpenAI-compatible APIs (aliases: `ollama`, `vllm`, `llama` map to `local`)
  - DAG-level configuration: Define `llm:` at the DAG level to share settings across multiple chat steps
  - Multi-turn conversations: Steps inherit conversation history from dependencies via `depends`, enabling context-aware AI workflows
  - Extended thinking mode: Enable deeper reasoning with `thinking.enabled` and effort levels (`low`, `medium`, `high`, `xhigh`)
  - Streaming output: Response tokens stream to stdout by default (disable with `stream: false`)
  - Automatic retry: Exponential backoff on transient errors (rate limits, server errors, timeouts)

  See Chat for full documentation.
- **Per-DAG Prometheus Metrics**: Enhanced observability with granular per-DAG metrics and histograms. (#1411)

  - `dagu_dag_runs_currently_running_by_dag` - Running count per DAG
  - `dagu_dag_runs_queued_by_dag` - Queue depth per DAG
  - `dagu_dag_runs_total_by_dag` - Run counts by DAG and status
  - `dagu_dag_run_duration_seconds` - Duration histogram per DAG
  - `dagu_queue_wait_seconds` - Queue wait time histogram per DAG

  See Prometheus Metrics for full documentation.
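As an illustration, the new duration histogram can drive latency alerting. A sketch of a Prometheus alerting rule, assuming the histogram carries a `dag` label; the rule name and the 10-minute threshold are hypothetical, not part of this release:

```yaml
# Hypothetical alerting rule built on the new per-DAG duration histogram.
groups:
  - name: dagu
    rules:
      - alert: DaguDagRunSlow
        # 95th-percentile run duration per DAG over the last 10 minutes
        expr: histogram_quantile(0.95, sum by (dag, le) (rate(dagu_dag_run_duration_seconds_bucket[10m]))) > 600
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "DAG {{ $labels.dag }} p95 run duration exceeds 10 minutes"
```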
- **Container Exec Mode**: Execute commands in already-running containers instead of creating new ones. This enables running workflows in containers started by Docker Compose or other orchestration tools. (#1515)

  String form - exec with the container's default settings:

  ```yaml
  container: my-running-container

  steps:
    - command: php artisan migrate
    - command: php artisan cache:clear
  ```

  Object form - with `user`, `workingDir`, and `env` overrides:

  ```yaml
  container:
    exec: my-running-container
    user: root
    workingDir: /var/www
    env:
      - APP_DEBUG=true

  steps:
    - command: composer install
  ```

  Exec mode works at both the DAG level and the step level. The container must already be running; Dagu waits up to 120 seconds for the container to reach the running state.

  See Container Field for full documentation.
- **Worker ID Tracking**: Added worker ID tracking to DAG runs for distributed setups. Users can now see which worker executed their jobs in both the DAG runs list and detail views. (#1500)

  - Local execution displays `local` as the worker ID
  - Distributed execution displays the worker ID (format: `{hostname}@{pid}`)
  - Worker ID is shown in the DAG runs table and run details panel
- **Configurable Cache Limits**: Added a `cache` configuration option with presets to control memory usage for in-memory caches. (#1411)

  ```yaml
  cache: normal  # options: low, normal, high (default: normal)
  ```

  Or via environment variable: `DAGU_CACHE=low`

  | Preset   | DAG   | DAGRun | APIKey | Webhook |
  |----------|-------|--------|--------|---------|
  | `low`    | 500   | 5,000  | 100    | 100     |
  | `normal` | 1,000 | 10,000 | 500    | 500     |
  | `high`   | 5,000 | 50,000 | 1,000  | 1,000   |

  See Server Configuration for details.
- **`DAGU_PARAMS_JSON` availability**: Every step now receives the merged parameter payload as JSON via the `DAGU_PARAMS_JSON` environment variable, even when parameters are supplied through legacy CLI strings. If a run starts with raw JSON parameters, the original payload is preserved verbatim. This makes it easier for scripts to consume structured parameter data without re-parsing shell strings. (#1550)
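For example, a step script can read the variable directly. A minimal Python sketch; the sample payload and parameter names are hypothetical, set here only to make the snippet self-contained:

```python
import json
import os

# Simulate the variable Dagu sets for each step (hypothetical sample payload).
os.environ.setdefault("DAGU_PARAMS_JSON", '{"ENV": "staging", "RETRIES": 3}')

# Parse the merged parameters as structured data -- no shell-string re-parsing needed.
params = json.loads(os.environ["DAGU_PARAMS_JSON"])
print(params["ENV"])      # staging
print(params["RETRIES"])  # 3
```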
- **DAG Spec Tab in Status View**: Added a new "Spec" tab to the DAG status page and the DAG run details modal/panel. This tab displays the DAG YAML specification in read-only mode with the Schema Documentation sidebar available for reference. The spec shown is the exact spec that was used at execution time, not the current spec. (#XXXX)
- **Wait Status Email Notifications**: Added `mailOn.wait` and `waitMail` configuration for sending email notifications when a DAG enters wait status (Human in the Loop). This enables teams to be notified when workflows require human approval.

  ```yaml
  mailOn:
    wait: true

  waitMail:
    from: dagu@example.com
    to: approvers@example.com
    prefix: "[WAITING]"
    attachLogs: false
  ```

  See Email Notifications for details.
- **HITL (Human-in-the-Loop)**: Added a `hitl` executor for pausing workflows until human approval. Enables approval gates where manual review is required before proceeding.

  ```yaml
  steps:
    - command: ./deploy.sh staging
    - type: hitl
      config:
        prompt: "Approve production?"
        input: [APPROVED_BY]
        required: [APPROVED_BY]
    - command: ./deploy.sh production
  ```

  Key features:

  - Pause workflow execution for human review
  - Collect parameters from approvers as environment variables
  - Approve or reject via web UI or REST API
  - New statuses: `waiting` (paused for approval) and `rejected` (approval denied)

  See HITL for full documentation.
- **Wait Handler**: Added a `handlerOn.wait` lifecycle handler that executes when a workflow enters wait status.

  ```yaml
  handlerOn:
    wait:
      command: notify-slack.sh "${DAG_WAITING_STEPS}"

  steps:
    - type: hitl
  ```

  See Lifecycle Handlers for full documentation.
- **Chat Executor**: Added a new executor for integrating Large Language Models into workflows. Supports OpenAI, Anthropic, Google Gemini, OpenRouter, and local models (Ollama, vLLM).

  ```yaml
  steps:
    - type: chat
      llm:
        provider: openai
        model: gpt-4o
      messages:
        - role: user
          content: "What is 2+2?"
      output: ANSWER
  ```

  Key features:

  - Multi-turn conversations: Steps inherit conversation history from dependencies
  - Variable substitution: Message content supports `${VAR}` syntax
  - Streaming: Response tokens are streamed to stdout by default
  - Multiple providers: `openai`, `anthropic`, `gemini`, `openrouter`, `local`
  - Automatic retry: Retries on rate limits and transient errors with exponential backoff

  See Chat Executor for full documentation.
## Changed
- **Metrics Endpoint Access Control**: The `/api/v2/metrics` endpoint now requires authentication by default for improved security. Configure `metrics: "public"` or set `DAGU_SERVER_METRICS=public` to restore the previous public access behavior. When private, use API tokens or basic auth for Prometheus scraping. (#1411)

  ```yaml
  # Require authentication (new default)
  metrics: "private"

  # Allow public access (previous behavior)
  metrics: "public"
  ```

  See Prometheus Metrics for configuration examples.
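For scraping a private endpoint, a Prometheus `scrape_config` sketch; the target host, port, and token placeholder are hypothetical, and `basic_auth` can be used instead of a bearer token:

```yaml
# Hypothetical Prometheus scrape config for the now-private metrics endpoint.
scrape_configs:
  - job_name: dagu
    metrics_path: /api/v2/metrics
    authorization:
      type: Bearer
      credentials: <your-dagu-api-token>  # or use basic_auth instead
    static_configs:
      - targets: ["dagu.example.com:8080"]
```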
## Fixed
- **Config Path Resolution**: All configuration paths (`DAGsDir`, `LogDir`, `DataDir`, etc.) are now resolved to absolute paths at load time with proper error handling. Previously, path-resolution failures were silently logged and the original unresolved path was used, which could cause mysterious runtime failures. Now, if any config path cannot be resolved to an absolute path, configuration loading fails with a clear error message.
- **Multiple dotenv files**: Fixed loading of multiple `.env` files. Previously, only the first file was processed. Now all files are loaded sequentially, with later files overriding values from earlier ones. Duplicate file paths are automatically deduplicated. Note: `.env` is always prepended to the list unless `dotenv: []` is specified. (#1519)

  ```yaml
  # All files are now loaded, with later files overriding earlier ones
  dotenv:
    - .env.defaults   # .env loaded first (auto-prepended), then this
    - .env.local      # Overrides earlier files
    - .env.production # Overrides all earlier files
  ```
- **Cache item counting**: Fixed the cache `Store` method incorrectly incrementing the item counter when updating existing keys, and the `Invalidate` method decrementing the counter for non-existent keys. (#1411)