Estimated end-of-life date, accurate to within three months: 05-2027
See the support level definitions for more information.
New Features
- mlflow
  - Adds a request header provider (auth plugin) for MLflow. If the environment variables `DD_API_KEY`, `DD_APP_KEY`, and `DD_MODEL_LAB_ENABLED` are set, HTTP requests to the MLflow tracking server will include the `DD-API-KEY` and `DD-APPLICATION-KEY` headers. #16685
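A rough sketch of what this header-provider logic does, assuming the three variables gate activation (the function name below is illustrative, not the plugin's real entry point):

```python
import os

# Illustrative sketch only: the real plugin registers with MLflow's request
# header provider mechanism inside ddtrace. This function name is made up.
def datadog_request_headers(environ=None):
    environ = os.environ if environ is None else environ
    required = ("DD_API_KEY", "DD_APP_KEY", "DD_MODEL_LAB_ENABLED")
    if not all(environ.get(name) for name in required):
        return {}  # provider stays inactive unless all three variables are set
    return {
        "DD-API-KEY": environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": environ["DD_APP_KEY"],
    }
```

With all three variables set, every request to the tracking server carries the two Datadog headers; otherwise the provider contributes nothing.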
- AI Guard
  - Calls to `evaluate` now block if blocking was enabled for the service in the AI Guard UI. This behavior can be disabled by passing the parameter `block=False`; the parameter now defaults to `block=True`.
  - This updates the AI Guard API client to return Sensitive Data Scanner (SDS) results in the SDK response.
  - This introduces AI Guard support for Strands Agents. The Plugin API requires `strands-agents>=1.29.0`; the HookProvider works with any version that exposes the hooks system.
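A minimal stub illustrating the new default behavior — everything here is hypothetical except the `evaluate` name and the `block` parameter; the real client lives in ddtrace and consults the AI Guard backend:

```python
class AIGuardBlocked(Exception):
    """Stand-in for the error raised when an evaluation is blocked (hypothetical name)."""

def evaluate(prompt, block=True, _ui_blocking_enabled=True):
    # Pretend the service's AI Guard policy flagged this prompt; whether that
    # verdict interrupts execution now depends on block (default True).
    if _ui_blocking_enabled and block:
        raise AIGuardBlocked("blocked by AI Guard policy")
    return {"blocked": _ui_blocking_enabled}

# block=False opts out of blocking and only reports the verdict.
verdict = evaluate("draft prompt", block=False)
```

Code that relied on the old non-blocking behavior can keep it by passing `block=False` explicitly.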
- azure_durable_functions
  - Adds tracing support for Azure Durable Functions. This integration traces durable activity and entity functions.
- profiling
  - This adds process tags to profiler payloads. To deactivate this feature, set `DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false`.
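Since several components in this release honor the same flag, the opt-out is a single environment variable (the entry point below is a placeholder for your own service):

```shell
# Disable process-tag propagation across profiler, runtime metrics, tracing, etc.
export DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false
ddtrace-run python app.py   # app.py is a placeholder
```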
- runtime metrics
  - This adds process tags to runtime metrics tags. To deactivate this feature, set `DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false`.
- remote configuration
  - This adds process tags to remote configuration payloads. To deactivate this feature, set `DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false`.
- dynamic instrumentation
  - This adds process tags to debugger payloads. To deactivate this feature, set `DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false`.
- crashtracking
  - This adds process tags to crash tracking payloads. To deactivate this feature, set `DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false`.
- data streams monitoring
  - This adds process tags to Data Streams Monitoring payloads. To deactivate this feature, set `DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false`.
- database monitoring
  - This adds process tags to Database Monitoring SQL service hash propagation. To deactivate this feature, set `DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false`.
- stats computation
  - This adds process tags to stats computation payloads. To deactivate this feature, set `DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false`.
- LLM Observability
  - Experiment tasks can now optionally receive dataset record metadata as a third `metadata` parameter. Tasks with the existing `(input_data, config)` signature continue to work unchanged.
  - This introduces `RemoteEvaluator`, which allows users to reference LLM-as-Judge evaluations configured in the Datadog UI by name when running local experiments. For more information, see the documentation: https://docs.datadoghq.com/llm_observability/guide/evaluation_developer_guide/#using-managed-evaluators
  - This adds cache creation breakdown metrics for the Anthropic integration. When making Anthropic calls with prompt caching, `ephemeral_5m_input_tokens` and `ephemeral_1h_input_tokens` metrics are now reported, distinguishing between 5-minute and 1-hour prompt caches.
  - Adds support for reasoning and extended thinking content in the Anthropic, LiteLLM, and OpenAI-compatible integrations. Anthropic thinking blocks (`type: "thinking"`) are now captured as `role: "reasoning"` messages in both streaming and non-streaming responses, as well as in input messages for tool use continuations. LiteLLM now extracts `reasoning_output_tokens` from `completion_tokens_details` and captures `reasoning_content` in output messages for OpenAI-compatible providers.
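The backward-compatible task signatures can be sketched as follows — the `run_task` helper is hypothetical; only the `(input_data, config)` and `(input_data, config, metadata)` shapes come from the note above:

```python
import inspect

# Illustrative only: how a runner can dispatch on task arity so that legacy
# two-argument tasks keep working while new tasks also receive metadata.
def run_task(task, input_data, config, metadata):
    params = inspect.signature(task).parameters
    if len(params) >= 3:
        return task(input_data, config, metadata)
    return task(input_data, config)  # legacy two-argument task

def legacy_task(input_data, config):
    return {"echo": input_data}

def metadata_task(input_data, config, metadata):
    return {"echo": input_data, "source": metadata.get("source")}

a = run_task(legacy_task, "hi", {}, {"source": "dataset-1"})
b = run_task(metadata_task, "hi", {}, {"source": "dataset-1"})
```

Dispatching on the declared parameter count is what lets both signatures coexist without a breaking change.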
- tracer
  - This introduces API endpoint discovery support for Tornado applications. HTTP endpoints are now automatically collected at application startup and reported via telemetry, bringing Tornado in line with Flask, FastAPI, and Django.
  - This adds process tags to trace payloads. To deactivate this feature, set `DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false`.
Bug Fixes
- CI Visibility
  - Fixes an issue where the pytest plugins `pytest-rerunfailures` and `flaky` were silently overridden by the ddtrace plugin. With this change, external rerun plugins now drive retries as expected when the Auto Test Retries and Early Flake Detection features are both disabled; otherwise, the ddtrace retry mechanism takes precedence and a warning is emitted.
  - pytest: Fixes missing ITR tags in the new pytest plugin that caused time saved by Test Impact Analysis to not appear in dashboards.
- tracing
  - Resolves an issue where a `RuntimeError` could be raised when iterating over the `context._meta` dictionary while creating spans or generating distributed traces.
  - Fixes an issue where telemetry debug mode was incorrectly enabled by `DD_TRACE_DEBUG` instead of its own dedicated environment variable `DD_INTERNAL_TELEMETRY_DEBUG_ENABLED`. Setting `DD_TRACE_DEBUG=true` no longer enables telemetry debug mode; to enable telemetry debug mode, set `DD_INTERNAL_TELEMETRY_DEBUG_ENABLED=true`.
- LLM Observability
  - This fix resolves an issue where `cache_creation_input_tokens` and `cache_read_input_tokens` were not captured when using the LiteLLM integration with providers that support prompt caching (e.g., Anthropic, OpenAI, DeepSeek).
- profiling
  - Fixes an issue where enabling the profiler with gevent workers caused gunicorn to skip graceful shutdown, killing in-flight requests immediately on `SIGTERM` instead of honoring `--graceful-timeout`. #16424
  - Fixes potential reentrant crashes in the memory profiler by avoiding object allocations and frees during stack unwinding inside the allocator hook. #16661
  - The profiler now correctly flushes profiles at most once per upload interval.
  - Fixes an `AttributeError` crash that occurred when the lock profiler or stack profiler encountered `_DummyThread` instances. `_DummyThread` lacks the `_native_id` attribute, so accessing `native_id` raises `AttributeError`; the profiler now falls back to using the thread identifier when `native_id` is unavailable.
  - Lock acquire samples are now recorded only if the `acquire` call was successful.
  - Fixes potential crashes at process shutdown due to incorrect detection of the VM finalization state when stopping periodic worker threads.
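The fallback described in the `_DummyThread` fix above can be sketched as follows (the helper name is illustrative, not the profiler's internal API):

```python
import threading

# Illustrative helper: prefer native_id, but fall back to the Python-level
# thread identifier when reading native_id raises AttributeError (as it does
# for threading._DummyThread, which lacks the backing _native_id attribute).
def profiled_thread_id(thread):
    native = getattr(thread, "native_id", None)  # getattr swallows AttributeError
    return native if native is not None else thread.ident
```

`getattr` with a default absorbs the `AttributeError` raised by the `native_id` property, so the same code path handles regular threads and dummy threads alike.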