dotnet/extensions v10.4.0

This release advances the AI abstractions with new hosted file, web search, and reasoning content types, stabilizes MCP and tool approval APIs, adds streaming latency metrics to OpenTelemetry instrumentation, and delivers bug fixes across caching, data ingestion, and resource monitoring.

Experimental API Changes

Now Stable

  • MCP Server Tool Content and Function Call Approval APIs are now stable (previously MEAI001) #7299
  • FakeLogCollector.GetLogsAsync(CancellationToken) is now stable (previously EXTEXP0003) #7332
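With `GetLogsAsync` now stable, tests can await log records as they arrive instead of polling `GetSnapshot()`. A minimal sketch, assuming `GetLogsAsync(CancellationToken)` returns an `IAsyncEnumerable<FakeLogRecord>` (the exact return type is inferred from the async-enumeration test fixes below, not stated in these notes):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Diagnostics.Testing;
using Microsoft.Extensions.Logging;

var collector = FakeLogCollector.Create(new FakeLogCollectorOptions());
var logger = new FakeLogger(collector);

// Guard against a hung test: stop enumerating after 5 seconds.
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5));

var reader = Task.Run(async () =>
{
    // Await records as they are written, rather than polling GetSnapshot().
    await foreach (var record in collector.GetLogsAsync(cts.Token))
    {
        Console.WriteLine($"{record.Level}: {record.Message}");
        if (record.Message == "done")
        {
            break;
        }
    }
});

logger.LogInformation("work started");
logger.LogInformation("done");
await reader;
```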

New Experimental APIs

  • New experimental AddExtendedHttpClientLogging overloads with wrapHandlersPipeline parameter (EXTEXP0013) #7231

Removed Experimental APIs

  • AI Tool Reduction experimental APIs removed (was experimental under MEAI001) #7353

What's Changed

AI

  • Add IHostedFileClient and friends #7269 by @stephentoub
  • Add web search tool call content #7276 by @stephentoub (co-authored by @Copilot)
  • Surface OpenAI-compatible reasoning_content as TextReasoningContent #7295 by @stephentoub
  • Stabilize MCP server tool content and function call approval APIs #7299 by @jozkee
  • Implement time_to_first_chunk and time_per_output_chunk streaming metrics in OpenTelemetryChatClient #7325 by @stephentoub (co-authored by @Copilot)
  • Add openai.api.type telemetry attribute to OpenAI IChatClient implementations #7316 by @stephentoub (co-authored by @Copilot)
  • Update OpenTelemetry Gen AI semantic conventions to v1.40 #7322 by @stephentoub (co-authored by @Copilot)
  • Fix tool definitions emission regardless of sensitivity setting #7346 by @stephentoub (co-authored by @Copilot)
  • Honor [Required] attribute in AI function parameter JSON schema generation #7272 by @stephentoub (co-authored by @Copilot)
  • AddAIContentType automatically registers content type against every base in the inheritance chain up to AIContent #7358 by @jozkee (co-authored by @Copilot)
  • Auto-mark server-handled FunctionCallContent as InformationalOnly #7314 by @stephentoub (co-authored by @Copilot)
  • Map ReasoningEffort.None and ExtraHigh to none and xhigh in OpenAI IChatClient implementations #7319 by @stephentoub (co-authored by @Copilot)
  • Handle DynamicMethod reflection limitations in AIFunctionFactory #7287 by @stephentoub (co-authored by @Copilot)
  • Fix Activity.Current nulled during streaming tool invocation #7321 by @flaviocdc (co-authored by @Copilot)
  • Handle FunctionCallOutputResponseItem in streaming response conversion #7307 by @stephentoub (co-authored by @Copilot)
  • Fix serialization of response continuation tokens #7356 by @stephentoub
  • Remove AI Tool Reduction experimental APIs #7353 by @stephentoub (co-authored by @Copilot)
  • Update OpenAI to 2.9.1 #7349 by @stephentoub
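The `TextReasoningContent` work above (#7295) means reasoning output from OpenAI-compatible endpoints now arrives as a distinct content type alongside ordinary text. A hedged sketch of consuming both from a streaming response, assuming an `IChatClient` obtained elsewhere and the `Microsoft.Extensions.AI` streaming extension that accepts a plain string prompt:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.AI;

static async Task StreamWithReasoningAsync(IChatClient client)
{
    await foreach (var update in client.GetStreamingResponseAsync("Why is the sky blue?"))
    {
        foreach (var content in update.Contents)
        {
            switch (content)
            {
                // reasoning_content from the model surfaces as TextReasoningContent (#7295).
                case TextReasoningContent reasoning:
                    Console.Write($"[thinking] {reasoning.Text}");
                    break;
                // Ordinary answer text remains TextContent.
                case TextContent text:
                    Console.Write(text.Text);
                    break;
            }
        }
    }
}
```

Wrapping the client with `UseOpenTelemetry()` would additionally record the new `time_to_first_chunk` and `time_per_output_chunk` streaming metrics from #7325.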

Telemetry and Observability

  • Introduce support for the Gauge metric type #7203 by @rainsxng
  • Update logging source generator to support generic methods #7331 by @svick (co-authored by @Copilot)
  • Update logging source generator to match runtime PR #124589 (ref readonly/params/scoped) #7333 by @svick (co-authored by @Copilot)
  • Promote FakeLogCollector.GetLogsAsync(CancellationToken) from experimental to stable #7332 by @Demo30
  • Remove obsolete CS1591 warning suppression from generated file preamble #7308 by @luissena
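The generic-method support in #7331 lets `[LoggerMessage]` methods carry type parameters. A minimal sketch; the exact set of supported constraints is an assumption, since these notes only state that generic methods are now accepted by the generator:

```csharp
using Microsoft.Extensions.Logging;

public static partial class Log
{
    // Previously the source generator rejected type parameters on logging
    // methods; this shape is now supported (#7331). The item is formatted
    // into the message via its ToString().
    [LoggerMessage(Level = LogLevel.Information, Message = "Processed item {Item}")]
    public static partial void ItemProcessed<T>(ILogger logger, T item);
}
```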

HTTP Resilience and Diagnostics

  • Expose wrapHandlersPipeline parameter in AddExtendedHttpClientLogging API #7231 by @rainsxng (co-authored by @Copilot)
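A hedged sketch of the new overload. The parameter name comes from the notes above; its semantics here (whether the logging handler wraps the whole delegating-handler pipeline or only the innermost transport handler) are an assumption, and the API is experimental under EXTEXP0013, so callers must suppress or acknowledge that diagnostic:

```csharp
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

#pragma warning disable EXTEXP0013 // experimental API
services.AddHttpClient("github")
        // Assumed behavior: true makes the logging handler observe the full
        // handler pipeline, including time spent in other delegating handlers.
        .AddExtendedHttpClientLogging(wrapHandlersPipeline: true);
#pragma warning restore EXTEXP0013
```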

Data Ingestion

  • Fix infinite loop in GetPreExistingChunksIdsAsync when records exceed MaxTopCount #7311 by @adamsitnik (co-authored by @Copilot)

Test Improvements

  • Fix flaky LinuxResourceHealthCheckTests by isolating MeterListener with ReferenceEquals #7302 by @stephentoub (co-authored by @Copilot)
  • Fix flaky resource monitoring test #7303 by @stephentoub
  • Fix flaky HttpRequestBuffering_DoesNotBufferDisabledOrOversizedLogs test #7304 by @stephentoub (co-authored by @Copilot)
  • Fix race condition in FakeLogCollector async enumeration test #7300 by @stephentoub (co-authored by @Copilot)
  • Fix cgroupv1 acceptance test to explicitly register the v1 parser #7296 by @stephentoub (co-authored by @Copilot)

Repository Infrastructure Updates

  • Update McpServer project template to ModelContextProtocol 1.1.0 #7338 by @jeffhandley (co-authored by @Copilot)
  • Update aiagent-webapi template to latest Agent Framework versions (rc1/260219) #7339 by @jeffhandley (co-authored by @Copilot)
  • Update SDK and dotnet version to 10.0.103 #7326 by @wtgodbe
  • Introduce an ApiChief skill to streamline updating API baselines #7281 by @jeffhandley (co-authored by @Copilot)
  • Update ApiChief script to use the net10.0 artifacts #7280 by @jeffhandley
  • Replace Windows queue image references from vs2022preview to vs2022 in pipeline YAML #7347 by @wtgodbe (co-authored by @Copilot)
  • Update public pipeline pool images to fix broken builds #7292 by @joperezr
  • Use smaller windows.vs2022.amd64.open pool image #7298 by @joperezr
  • Remove main-to-dev inter-branch merge automation #7315 by @joperezr (co-authored by @Copilot)
  • Update FakeLogCollector API baselines #7334 by @Demo30
  • Run the issue-labeler over pull requests using polling #7273 by @jeffhandley (co-authored by @Copilot)
  • Add npmAuthenticate task to fix npm E401 errors on CI agents #7364 by @ilonatommy (co-authored by @Copilot)
  • Pass sourceIndexBuildCommand through to SourceIndex #7348 by @wtgodbe
  • Bump qs from 6.14.1 to 6.14.2 in /src/Libraries/Microsoft.Extensions.AI.Evaluation.Reporting/TypeScript #7301
  • Bump rollup from 4.40.0 to 4.59.0 #7345

Acknowledgements

Full Changelog: v10.3.0...v10.4.0
