DataDog/dd-trace-py 4.8.0rc5

Pre-release · 9 hours ago

Estimated end-of-life date, accurate to within three months: 05-2027
See the support level definitions for more information.

Upgrade Notes

  • claude_agent_sdk: Tool span resource names have changed from the tool name (e.g. Read, Bash) to claude_agent_sdk.tool. The specific tool name is still available in the span name (e.g. claude_agent_sdk.tool.Read). Users relying on tool resource names should update them accordingly.

  • ray: Adds DD_TRACE_RAY_SUBMISSION_SPANS_ENABLED (default: False) configuration to control Ray submission tracing. Set DD_TRACE_RAY_SUBMISSION_SPANS_ENABLED=true to trace task.submit and actor_method.submit spans. Leave it unset to trace only execution spans. See Ray integration documentation for more details.

  • ray: ray.job.submit spans are removed. Ray job submission outcome is now reported on the existing ray.job span through ray.job.submit_status.

Deprecation Notes

  • Tracing: DD_TRACE_INFERRED_PROXY_SERVICES_ENABLED is deprecated and will be removed in 5.0.0. Use DD_TRACE_INFERRED_SPANS_ENABLED instead. The old environment variable continues to work but emits a DDTraceDeprecationWarning when set.
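
    A minimal migration sketch for an environment-variable-based setup (both names come from this note; the old variable keeps working until 5.0.0 but warns):

    ```shell
    # Before (deprecated; emits DDTraceDeprecationWarning when set):
    export DD_TRACE_INFERRED_PROXY_SERVICES_ENABLED=true

    # After (replacement, removed-in-5.0.0-safe):
    export DD_TRACE_INFERRED_SPANS_ENABLED=true
    ```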

  • tracing: The pin parameter in ddtrace.contrib.dbapi.TracedConnection, ddtrace.contrib.dbapi.TracedCursor, and ddtrace.contrib.dbapi_async.TracedAsyncConnection is deprecated and will be removed in version 5.0.0. To configure DB tracing, use integration configuration options and environment variables instead.

  • LLM Observability: Removes support for the RAGAS integration. As an alternative, if you have RAGAS evaluations, you can manually submit these evaluation results. See LLM Observability external evaluation documentation for more information.

New Features

  • AI Guard: Add DD_AI_GUARD_BLOCK environment variable. Defaults to True, which means the blocking behavior configured in the Datadog AI Guard UI (in-app) will be honored. Set to False to force monitor-only mode locally: evaluations are still performed but AIGuardAbortError is never raised, regardless of the in-app blocking setting.

  • AI Guard response objects now include a dict field tag_probs with the probabilities for each tag.
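
    As a sketch of how the new field might be consumed; the tag_probs attribute comes from this note, but the tag names and helper below are illustrative assumptions, not a documented taxonomy:

    ```python
    # Illustrative helper: pick the highest-probability tag from the new
    # `tag_probs` dict field on AI Guard response objects.

    def riskiest_tag(tag_probs):
        """Return the (tag, probability) pair with the highest probability."""
        return max(tag_probs.items(), key=lambda kv: kv[1])

    # Example shape of the dict field (placeholder tag names):
    tag_probs = {"prompt_injection": 0.91, "benign": 0.06, "jailbreak": 0.03}
    tag, prob = riskiest_tag(tag_probs)  # ("prompt_injection", 0.91)
    ```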

  • CI Visibility: Adds Bazel offline execution support with two modes: manifest mode (DD_TEST_OPTIMIZATION_MANIFEST_FILE), which reads settings and test data from pre-fetched cache files without network access; and payload-files mode (DD_TEST_OPTIMIZATION_PAYLOADS_IN_FILES), which writes test event, coverage, and telemetry payloads as JSON files instead of sending HTTP requests. Both modes can be used independently or together.

  • LLM Observability: Captures individual LLM spans for each Claude model turn within a Claude Agent SDK session. Each LLM span captures the input messages, output messages, model name, and token usage metrics (for claude_agent_sdk >= 0.1.49).

  • AAP: This adds Application Security support for FastAPI and Starlette applications using mounted sub-applications (via app.mount()). WAF evaluation, path parameter extraction, API endpoint discovery, and http.route reporting now correctly account for mount prefixes in sub-application routing.

  • google_cloud_pubsub: This adds tracing for Google Cloud Pub/Sub admin operations on topic, subscription, snapshot, and schema management methods.

  • google_cloud_pubsub: Adds support for Google Cloud Pub/Sub push subscriptions. When a push subscription delivers a message via HTTP, the integration now creates an inferred gcp.pubsub.receive span that captures subscription and message metadata. Use DD_GOOGLE_CLOUD_PUBSUB_PROPAGATION_AS_SPAN_LINKS to control whether the inferred span becomes a child of the producer trace or starts a new trace with the producer context attached as a span link (default: False).

  • LLM Observability: Add ExperimentRun.as_dataframe() to convert experiment run results into a pandas.DataFrame with a two-level MultiIndex on columns. Each top-level group (input, output, expected_output, evaluations, metadata, error, span_id, trace_id) maps to the first index level. Dict-valued fields are flattened one level deep; scalar fields use an empty string as the sub-column name. Each evaluator gets its own column containing the full evaluation result dict. Requires pandas to be installed (pip install pandas).
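
    The column-flattening rule described above can be sketched in plain Python; this is an illustration of the documented layout, not ddtrace's own implementation:

    ```python
    def flatten_record(record):
        """Map one run-result record to (top_level, sub_column) -> value pairs,
        mirroring the two-level MultiIndex: dict-valued fields are flattened
        one level deep, scalar fields use "" as the sub-column name."""
        columns = {}
        for field, value in record.items():
            if isinstance(value, dict):
                for sub, v in value.items():
                    columns[(field, sub)] = v
            else:
                columns[(field, "")] = value
        return columns

    row = flatten_record({
        "input": {"question": "2+2?"},
        "output": "4",
        "evaluations": {"exact_match": {"score": 1.0}},
    })
    # row[("input", "question")] == "2+2?"; row[("output", "")] == "4"
    ```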

  • LLM Observability: Adds an eval_scope parameter to LLMObs.submit_evaluation() (one of "span" (default) or "trace"). Use eval_scope="trace" to associate an evaluation with an entire trace by passing the root span context.

  • LLM Observability: Adds LLMObs.get_spans() to retrieve span events from the Datadog platform API (GET /api/v2/llm-obs/v1/spans/events). Supports filtering by trace ID, span ID, span kind, span name, ML app, tags, and time range. Results are auto-paginated. Requires DD_API_KEY and DD_APP_KEY.
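
    A hedged usage sketch; the keyword names below are assumptions inferred from the filters this note lists, not the exact ddtrace signature:

    ```python
    # Hypothetical helper that drops unset filters before querying.

    def build_filters(**kwargs):
        """Keep only the filters that were actually provided."""
        return {k: v for k, v in kwargs.items() if v is not None}

    filters = build_filters(span_kind="llm", ml_app="my-app", trace_id=None)

    # With DD_API_KEY and DD_APP_KEY set, the call might then look like:
    #
    #   from ddtrace.llmobs import LLMObs
    #   spans = LLMObs.get_spans(**filters)  # results are auto-paginated
    ```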

  • profiling: Profiles generated from fork-based servers now include a process_type tag with the value main or worker.

  • tracing: The default span name for @tracer.wrap can now include the class name. For now, this is opt-in and can be enabled by setting DD_TRACE_WRAP_SPAN_NAME_INCLUDE_CLASS=true. The new naming will become the default in the next major release.

  • llmobs: Adds support for enabling and disabling LLMObs via Remote Configuration.

  • mysql: This introduces tracing support for mysql.connector.aio.connect in the MySQL integration.

  • profiling: Thread sub-sampling is now supported. This allows setting a maximum number of threads whose stacks are captured at each sampling interval, which can be used to reduce the CPU overhead of the Stack Profiler.

  • llama_index: Adds APM tracing and LLM Observability support for llama-index-core>=0.11.0. Traces LLM calls, query engines, retrievers, embeddings, and agents. See the llama_index documentation for more information.

  • ASM: Adds a LiteLLM proxy guardrail integration for Datadog AI Guard. The ddtrace.appsec.ai_guard.integrations.litellm.DatadogAIGuardGuardrail class can be registered as a custom guardrail in the LiteLLM proxy to evaluate requests and responses against AI Guard security policies. Requires the LiteLLM proxy guardrails API v2 available since litellm>=1.46.1.

  • azure_cosmos: Adds tracing support for Azure Cosmos DB. This integration traces CRUD operations on Cosmos DB databases, containers, and items.

  • LLM Observability: Adds a decorator tag to LLM Observability spans created by a function decorator.

  • CI Visibility: adds automatic log correlation and submission so that test logs appear alongside their corresponding test run in Datadog. Set DD_AGENTLESS_LOG_SUBMISSION_ENABLED=true for agentless setups, or DD_LOGS_INJECTION=true when using the Datadog Agent.

  • tracing: Adds support for exporting traces in OTLP HTTP/JSON format via libdatadog. Set OTEL_TRACES_EXPORTER=otlp to send spans to an OTLP endpoint instead of the Datadog Agent.
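
    A minimal sketch, assuming a local OTLP collector; OTEL_EXPORTER_OTLP_ENDPOINT is the standard OpenTelemetry endpoint variable, not a Datadog-specific one:

    ```shell
    # Route spans to an OTLP endpoint instead of the Datadog Agent.
    export OTEL_TRACES_EXPORTER=otlp
    # Standard OTel endpoint variable; adjust host/port for your collector.
    export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
    ddtrace-run python app.py
    ```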

  • LLM Observability: Experiments accept a pydantic_evals ReportEvaluator as a summary evaluator when its evaluate return annotation is exactly ScalarResult. The scalar value is recorded as the summary evaluation. Report evaluators that declare a broader analysis return type (for example, the full ReportAnalysis union) are not accepted as summary evaluators; use a class-based or function summary evaluator instead. Examples and further documentation can be found in the [LLM Observability evaluation developer guide](https://docs.datadoghq.com/llm_observability/guide/evaluation_developer_guide).

    Example:

    from pydantic_evals.evaluators import EqualsExpected
    from pydantic_evals.evaluators import ReportEvaluator
    from pydantic_evals.evaluators import ReportEvaluatorContext
    from pydantic_evals.reporting.analyses import ScalarResult

    from ddtrace.llmobs import LLMObs

    dataset = LLMObs.create_dataset(
        dataset_name="<DATASET_NAME>",
        description="<DATASET_DESCRIPTION>",
        records=[RECORD_1, RECORD_2, RECORD_3, ...],
    )

    # The return annotation is exactly ScalarResult, so this report
    # evaluator is accepted as a summary evaluator.
    class TotalCasesEvaluator(ReportEvaluator):
        def evaluate(self, ctx: ReportEvaluatorContext) -> ScalarResult:
            return ScalarResult(
                title="Total Cases",
                value=len(ctx.report.cases),
                unit="cases",
            )

    def my_task(input_data, config):
        return input_data["output"]

    equals_expected = EqualsExpected()
    summary_evaluator = TotalCasesEvaluator()

    experiment = LLMObs.experiment(
        name="<EXPERIMENT_NAME>",
        task=my_task,
        dataset=dataset,
        evaluators=[equals_expected],
        summary_evaluators=[summary_evaluator],
        description="<EXPERIMENT_DESCRIPTION>",
    )

    result = experiment.run()

Bug Fixes

  • CI visibility: This fix resolves issues where CI provider metadata could omit pull request base branch and head commit details or report incorrect pull request values for some providers.

  • AAP: This fix resolves an issue where Application and API Protection (AAP) was incorrectly reported as an enabled product in internal telemetry for all services by default. Previously, registering remote configuration listeners caused AAP to be reported as activated even when it was not actually enabled. This had no impact on customers as it only affected internal telemetry data. AAP is now only reported as activated when it is explicitly enabled or enabled through remote configuration.

  • asgi: Fixed an issue that caused the network.client.ip and http.client_ip span tags to be missing for FastAPI.

  • iast: A crash has been fixed.

  • lambda: Fixes a spurious Unable to create shared memory warning on every AWS Lambda cold start.

  • LLM Observability: Fixes an issue where an APM_TRACING remote configuration payload that did not include an llmobs section would disable LLM Observability on services where it had been enabled programmatically via LLMObs.enable(). Services that enabled LLM Observability via the DD_LLMOBS_ENABLED environment variable were unaffected. The handler now only changes LLM Observability state when the remote configuration payload explicitly carries an llmobs.enabled directive.

  • LLM Observability: Fixes a circular import in ddtrace.llmobs._writer when anthropic, openai, and botocore are installed.

  • Prevent potential crashes when the client library fails to restart a worker thread due to hitting a system resource limit.

  • internal: This fix resolves an issue where reading unknown attributes from ddtrace.internal.process_tags caused a KeyError instead of raising an AttributeError.

  • rq: Fixes compatibility with RQ 2.0. Replaces the removed Job.get_id() with the job.id property, and handles Job.get_status() now raising InvalidJobOperation for expired jobs (e.g. result_ttl=0) instead of returning None. #16682

  • tornado: Fixes an issue where routes inside a nested Tornado application were matched in reverse declaration order, causing a catch-all pattern to win over a more-specific route defined before it. This resulted in incorrect http.route tags on spans.

  • tornado: The http.route tag is now populated for routes whose regex cannot be reversed by Tornado (e.g. patterns containing non-capturing groups such as (?:a|b)). Capturing groups are still rendered as %s, consistent with Tornado's own route format, while non-capturing constructs are kept verbatim.

  • telemetry: This fix resolves an issue where unhandled exceptions raised by importlib.metadata during interpreter shutdown (for example, when Gunicorn workers exit uncleanly after a failed startup) caused update_imported_dependencies to surface errors through sys.excepthook. Failures while discovering dependencies for the app-dependencies-loaded telemetry payload are now logged at debug level and swallowed so they no longer propagate out of the dependency-reporting path.

  • profiling: Fixes noise caused by the profiler attempting to load its native module even when profiling was disabled.

  • profiling: A race condition which could make asyncio code raise exceptions at exit has been fixed.

  • remote_config: This fix resolves an issue where brief Datadog Agent connection errors could drop Remote Configuration polls, causing products such as Dynamic Instrumentation to temporarily appear disabled.

  • LLM Observability: Changes the default model_provider and model_name from "custom" to "unknown" when a model does not match any known provider prefix in the Google GenAI, VertexAI, and Google ADK integrations.

  • LLM Observability: This fix resolves tracing issues for pydantic-ai >= 1.63.0 where tool spans and agent instructions were not being properly captured. This fix adds tracing to the ToolManager.execute_tool_call method for newer versions of the library to resolve this issue.

  • celery: Removes an unnecessary warning log about a missing span when using Task.replace().

  • django: Fixes RuntimeError: coroutine ignored GeneratorExit that occurred under ASGI with async views and async middleware hooks on Python 3.13+. Async view methods and middleware hooks are now correctly detected and awaited instead of being wrapped with sync bytecode wrappers.

  • Code Security (IAST): Fixes a missing return in the IAST taint tracking add_aspect native function that caused redundant work when only the right operand of a string concatenation was tainted.

  • openai: Fixes async streaming spans never being finished when using AsyncAPIResponse (e.g. responses.create(stream=True)). The sync handle_request hook called resp.parse() without awaiting the coroutine, preventing the stream from being wrapped in TracedAsyncStream. This caused disconnected LLM Observability traces for streamed sub-agent calls via the OpenAI Agents SDK.

  • Fixed a race condition with internal periodic threads that could have caused a rare crash when forking.

  • ray: This fix resolves an issue where Ray integration spans could use an incorrect service name when the Ray job name was set after instrumentation initialization.

  • tracing: Fixes the svc.auto process tag attribution logic. The tag now correctly reflects the auto-detected service name derived from the script or module entrypoint, matching the service name the tracer would assign to spans.

  • Fixes an issue where internal background threads could cause crashes or instability in applications that fork (e.g. Gunicorn, uWSGI) or during Python shutdown. Affected applications could experience intermittent crashes or hangs on exit.

  • tracing: This fix resolves an issue where applications started with python -m <module> could report entrypoint.name as -m in process tags.

  • apm: Fixed an issue where network.client.ip and http.client_ip span tags were missing when client IP collection was enabled and request had no headers.

  • litellm: Fix missing LLMObs spans when routing requests through a litellm proxy. Proxy requests were incorrectly suppressed and resulted in empty or missing LLMObs spans. Proxy requests for OpenAI models are now always handled by the litellm integration.

  • profiling: A rare crash occurring when profiling asyncio code with many tasks or deep call stacks has been fixed.

  • serverless: AWS Lambda functions now appear under their function name as the service when DD_SERVICE is not explicitly configured. Service remapping rules configured in Datadog will now apply correctly to Lambda spans.

  • LLM Observability: Fixes an issue where deeply nested tool schemas in Anthropic and OpenAI integrations were not yet supported. The Anthropic and OpenAI integrations now check each tool's schema depth at extraction time. If a tool's schema exceeds the maximum allowed depth, the schema is truncated.

  • Code Security (IAST): This fix resolves a thread-safety issue in the IAST taint tracking context that could cause vulnerability detection to silently stop working under high concurrency in multi-threaded applications.

  • internal: A crash has been fixed.

  • CI Visibility: This fix resolves an issue where a failure response from the /search_commits endpoint caused the git metadata upload to fall back to sending the full 30-day commit history instead of aborting. This fallback could trigger cascading write load on the backend. The upload now aborts when search_commits fails, matching the behavior when the /packfile upload itself fails.

  • LLM Observability: Fixes multimodal OpenAI chat completion inputs being rendered as raw iterable objects in LLM Observability traces. Multimodal content parts (text, image, audio) are now properly materialized and formatted as readable text.

  • profiling: A rare crash that could occur post-fork in fork-based applications has been fixed.

  • profiling: A bug in Lock Profiling that could cause crashes when trying to access attributes of custom Lock subclasses (e.g. in Ray) has been fixed.

  • CI Visibility: This fix resolves an issue where pytest-xdist worker crashes (os._exit, SIGKILL, segfault) caused buffered test events to be lost. To enable eager flushing, set DD_TRACE_PARTIAL_FLUSH_MIN_SPANS=1.

  • profiling: Fixes lock profiling samples not appearing in the Thread Timeline view for events collected on macOS.

  • internal: Fix a potential internal thread leak in fork-heavy applications.

  • internal: This fix resolves an issue where a ModuleNotFoundError could be raised at startup in Python environments without the _ctypes extension module.

  • internal: A crash that could occur post-fork in fork-heavy applications has been fixed.

  • LLM Observability: Fixes incorrect span hierarchy in LLMObs traces when using the ddtrace SDK alongside OTel-based instrumentation (e.g. Strands Agents). OTel gen_ai spans (e.g. invoke_agent) were incorrectly appearing as siblings of their SDK parent span (e.g. call_agent) rather than being nested under it.

  • LLM Observability: Fixes model_name and model_provider on AWS Bedrock LLM spans, which were previously reported as the full model_id identifier (e.g. "amazon.nova-lite-v1:0") and "amazon_bedrock", respectively. Bedrock spans' model_name and model_provider now correctly match backend pricing data, which enables features including cost tracking.

  • LLM Observability: Fixes an issue where deferred tools (defer_loading=True) in Anthropic and OpenAI integrations caused LLMObs span payloads to include full tool descriptions and JSON schemas for every tool in a large catalog. Deferred tool definitions now have their description and schema stripped from span metadata, with only the tool name preserved.

Other Changes

  • remote config: Removes noisy warning log that was being emitted when an unsupported agent config payload was received.

  • ASM: Update default security rules to 1.18.0. Notably, this adds business logic event coverage for Stripe auto-instrumentation and expands WAF rule coverage (ZipSlip detection, file upload with double extension, broader header scanning, and expanded XXE detection).
