Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.
Upgrade Notes
- LLM Observability: Experiments can now be stored under a different project than the one defined in `LLMObs.enable`.
Deprecation Notes
- LLM Observability: `LLMObs.submit_evaluation_for()` has been deprecated and will be removed in a future version. It will be replaced with `LLMObs.submit_evaluation()`, which will take the signature of the original `LLMObs.submit_evaluation_for()` method in ddtrace version 4.0. Please use `LLMObs.submit_evaluation()` for submitting evaluations moving forward. To migrate:
  - `LLMObs.submit_evaluation_for(...)` users: rename to `LLMObs.submit_evaluation(...)`.
  - `LLMObs.submit_evaluation(...)` users: rename the `span_context` argument to `span`, i.e. change `LLMObs.submit_evaluation(span_context={"span_id": ..., "trace_id": ...}, ...)` to `LLMObs.submit_evaluation(span={"span_id": ..., "trace_id": ...}, ...)`.
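Code that must run on both 3.x and 4.0 can bridge the rename with a small forward-compatibility helper. This is a hypothetical sketch, not part of ddtrace: it assumes only what the note above states, namely that the deprecated method is named `submit_evaluation_for`, its replacement is `submit_evaluation`, and both accept the evaluation fields as keyword arguments.

```python
def submit_evaluation_compat(llmobs, span, **kwargs):
    """Submit an evaluation via whichever method this ddtrace version provides.

    Hypothetical helper for illustration only. On 3.x both methods exist, so we
    prefer ``submit_evaluation_for`` (whose ``span`` keyword already has the 4.0
    meaning); on 4.0, where it has been removed, we fall back to
    ``submit_evaluation``, which by then takes the same signature.
    """
    if hasattr(llmobs, "submit_evaluation_for"):
        return llmobs.submit_evaluation_for(span=span, **kwargs)
    return llmobs.submit_evaluation(span=span, **kwargs)
```

Call sites then pass `span={"span_id": ..., "trace_id": ...}` plus their usual evaluation keywords, and the helper can be deleted once 4.0 is the minimum supported version.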
- tracing: `Tracer.on_start_span` and `Tracer.deregister_on_start_span` are deprecated and will be removed in v4.0.0 with no planned replacement.
- Support for ddtrace with Python 3.8 is deprecated and will be removed in version 4.0.0.
New Features
- CI Visibility: This introduces Test Impact Analysis code coverage support for Python 3.13.
- azure_eventhubs: Add support for Azure Event Hubs producers.
- azure_functions: Add support for Event Hubs triggers.
- LLM Observability
  - Introduces automatic tracing context propagation for LLM Observability traces involving asynchronous tasks created via `asyncio.create_task()`.
  - The `asyncio` and `futures` integrations are now enabled by default on `LLMObs.enable()`, which enables asynchronous context propagation for those libraries.
  - The `LLMObs.submit_evaluation()` and `LLMObs.submit_evaluation_for()` methods now accept a `reasoning` argument to denote an explanation of the evaluation results.
  - The OpenAI integration now submits LLM spans to LLM Observability for `parse()` methods used for structured outputs.
  - The `LLMObs.submit_evaluation_for()` method now accepts an `assessment` argument to denote whether the evaluation is valid or correct. Accepted values are `"pass"` and `"fail"`.
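The asynchronous context propagation described above follows the same model `asyncio` itself uses for `contextvars`: a task created with `asyncio.create_task()` inherits a snapshot of the creating task's context. A minimal stdlib-only sketch of that mechanism (no ddtrace involved; `active_span` is a hypothetical stand-in for a tracing context, not a ddtrace API):

```python
import asyncio
import contextvars

# Hypothetical stand-in for an active-span context; ddtrace's real
# propagation is internal to the library.
active_span = contextvars.ContextVar("active_span", default=None)

async def child():
    # The child task sees the parent's value because asyncio.create_task()
    # copies the creating task's context at creation time.
    return active_span.get()

async def parent():
    active_span.set("parent-span")
    task = asyncio.create_task(child())
    return await task

print(asyncio.run(parent()))  # prints "parent-span"
```

Before this release, enabling the `asyncio` integration manually was needed for LLM Observability spans to flow across task boundaries like this; it is now on by default with `LLMObs.enable()`.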
- openai: Adds support for tracing the `parse()` methods for structured outputs on `chat.completions` and `responses` endpoints (available in OpenAI SDK >= 1.92.0).
- AAP
  - This introduces `track_user_id` in the ATO SDK, which is equivalent to `track_user` but does not require the login, only the user ID.
  - This introduces support for custom scanners for data classification.
Bug Fixes
- AAP
  - This fix resolves an issue where downstream request analysis would not match headers in rules when using `requests` with `urllib3<2`.
  - This is a tentative fix for rare memory problems with libddwaf that we have so far been unable to reproduce.
- Pin to `wrapt<2` until we can ensure full compatibility with the breaking changes.
- CI Visibility
  - This fix resolves an issue where tests would be incorrectly detected as third-party code if a third-party package containing a folder with the same name as the tests folder was installed. For instance, the `sumy` package installs files under `tests/*` in `site-packages`, and this would cause any modules under `tests.*` to be considered third-party.
  - This fix resolves an issue with our coverage implementation for Python versions 3.12+ that affected generated bytecode not mapped to a line in the source code.
- LLM Observability: Resolves an issue with the Google GenAI integration where processing token metrics would sometimes be skipped if the LLM message had no text part.
- grpc: This fix resolves an issue where the internal span was left active in the caller when using the future interface.
- Profiling: prevent potential deadlocks with thread pools.
- ray
  - This fix resolves an issue where submitting Ray jobs caused an `AttributeError` crash in certain configurations.
  - This fix resolves an issue where long-running job spans could remain unfinished when an exception occurred during job submission.
  - This fix resolves an issue where long-running spans did not preserve the correct resource name when being recreated.
- otel: Ensures the `/v1/logs` path is correctly added to prevent log payloads from being dropped by the Agent when using the `OTEL_EXPORTER_OTLP_ENDPOINT` configuration. Metrics and traces are unaffected.
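For context, `OTEL_EXPORTER_OTLP_ENDPOINT` is the standard OTLP base-endpoint variable, and per-signal paths such as `/v1/logs` are appended to it. The host and port below are illustrative assumptions (4318 is the conventional OTLP/HTTP port), not values taken from these notes:

```shell
# Base OTLP endpoint; the signal path is not included here.
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
# With this fix, log payloads go to http://localhost:4318/v1/logs
# (path appended automatically); metrics and traces were already routed correctly.
```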