pypi ddtrace 3.17.0rc1


Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.

Upgrade Notes

  • LLM Observability: Experiments can now be stored under a different project than the one defined in LLMObs.enable()

Deprecation Notes

  • LLM Observability: LLMObs.submit_evaluation_for() has been deprecated and will be removed in a future version. It will be replaced with LLMObs.submit_evaluation() which will take the signature of the original LLMObs.submit_evaluation_for() method in ddtrace version 4.0. Please use LLMObs.submit_evaluation() for submitting evaluations moving forward.
    To migrate:
    • LLMObs.submit_evaluation_for(...) users: rename to LLMObs.submit_evaluation(...)
    • LLMObs.submit_evaluation(...) users: rename the span_context argument to span, i.e. change LLMObs.submit_evaluation(span_context={"span_id": ..., "trace_id": ...}, ...) to LLMObs.submit_evaluation(span={"span_id": ..., "trace_id": ...}, ...)
  • tracing: Tracer.on_start_span and Tracer.deregister_on_start_span are deprecated and will be removed in v4.0.0 with no planned replacement.
  • Support for ddtrace with Python 3.8 is deprecated and will be removed in version 4.0.0.
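The evaluation-API migration above amounts to a function rename plus a keyword rename. As a minimal sketch, the shim below (a hypothetical helper, not part of ddtrace) shows mechanically how existing call keyword arguments map onto the new LLMObs.submit_evaluation() signature:

```python
# Hypothetical migration shim illustrating the ddtrace 4.0 changes:
#   LLMObs.submit_evaluation_for(...)  ->  LLMObs.submit_evaluation(...)
#   submit_evaluation(span_context=..) ->  submit_evaluation(span=..)
def migrate_evaluation_kwargs(kwargs):
    """Return a copy of the kwargs with span_context renamed to span."""
    migrated = dict(kwargs)
    if "span_context" in migrated:
        migrated["span"] = migrated.pop("span_context")
    return migrated

old_call = {
    "span_context": {"span_id": "123", "trace_id": "abc"},
    "label": "correctness",
    "metric_type": "categorical",
    "value": "pass",
}
print(migrate_evaluation_kwargs(old_call))
# The span_id/trace_id values above are placeholders.
```

The resulting dict can be passed directly as LLMObs.submit_evaluation(**migrated_kwargs); all other arguments pass through unchanged.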

New Features

  • CI Visibility: This introduces Test Impact Analysis code coverage support for Python 3.13.
  • azure_eventhubs: Add support for Azure Event Hubs producers.
  • azure_functions: Add support for Event Hubs triggers.
  • LLM Observability
    • Introduces automatic tracing context propagation for LLM Observability traces involving asynchronous tasks created via asyncio.create_task().
    • The asyncio and futures integrations are now enabled by default on LLMObs.enable(), which enables asynchronous context propagation for those libraries.
    • The LLMObs.submit_evaluation() and LLMObs.submit_evaluation_for() methods now accept a reasoning argument to denote an explanation of the evaluation results.
    • The OpenAI integration now submits LLM spans to LLM Observability for parse() methods used for structured outputs.
    • The LLMObs.submit_evaluation_for() method now accepts an assessment argument to denote whether the evaluation passed or failed. Accepted values are "pass" and "fail".
  • openai: Adds support for tracing the parse() methods for structured outputs on chat.completions and responses endpoints (available in OpenAI SDK >= 1.92.0).
  • AAP
    • This introduces track_user_id in the ATO SDK, which is equivalent to track_user but requires only the user ID, not the login.
    • This introduces support for custom scanners for data classification.
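The asynchronous context propagation feature above applies to tasks spawned with asyncio.create_task(). The sketch below uses plain asyncio to show the call pattern it covers; the summarize coroutine and its inputs are illustrative, not part of the ddtrace API:

```python
import asyncio

async def summarize(doc):
    # Under LLMObs.enable(), ddtrace's asyncio integration would propagate
    # the active LLM Observability span context into this task automatically,
    # so spans created here parent correctly without manual plumbing.
    return f"summary of {doc}"

async def main():
    # Previously, each create_task() call could lose the parent span context;
    # with this release the context follows the task.
    tasks = [asyncio.create_task(summarize(d)) for d in ("a", "b")]
    return await asyncio.gather(*tasks)

print(asyncio.run(main()))  # → ['summary of a', 'summary of b']
```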

Bug Fixes

  • AAP
    • This fix resolves an issue where downstream request analysis would not match headers in rules when using requests with urllib3<2.
    • This is a tentative fix for rare memory issues with libddwaf that have not yet been reproduced.
  • Pin to wrapt<2 until we can ensure full compatibility with the breaking changes.
  • CI Visibility
    • This fix resolves an issue where tests would be incorrectly detected as third-party code if a third-party package containing a folder with the same name as the tests folder was installed. For instance, the sumy package installs files under tests/* in site-packages, and this would cause any modules under tests.* to be considered third-party.
    • This fix resolves an issue with our coverage implementation for Python versions 3.12+ that affects generated bytecode that isn't mapped to a line in the code.
  • LLM Observability: Resolves an issue with the Google GenAI integration where processing token metrics would sometimes be skipped if the LLM message had no text part.
  • grpc: This fix resolves an issue where the internal span was left active in the caller when using the future interface.
  • Profiling: prevent potential deadlocks with thread pools.
  • ray
    • This fix resolves an issue where submitting Ray jobs caused an AttributeError crash in certain configurations.
    • This fix resolves an issue where long-running job spans could remain unfinished when an exception occurred during job submission.
    • This fix resolves an issue where long-running spans did not preserve the correct resource name when being recreated.
  • otel: Ensures the /v1/logs path is correctly added to prevent log payloads from being dropped by the Agent when using OTEL_EXPORTER_OTLP_ENDPOINT configuration. Metrics and traces are unaffected.
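As a minimal sketch of the configuration the otel logs fix applies to (the host and port below are assumptions, matching the common OTLP/HTTP default):

```shell
# Point the OTLP exporter at a base endpoint; with this fix, the /v1/logs
# path is appended for log payloads automatically, so log data is no longer
# dropped by the Agent when only the base endpoint is configured.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
```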
