apollographql/router v1.60.1


🐛 Fixes

Header propagation rules passthrough (PR #6690)

Header propagation contains logic to prevent a header from being propagated more than once. This was broken
in #6281, which considered a header propagated regardless of whether a rule actually matched.

This PR alters the logic so that a header is only marked as fixed once it has actually been populated.
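
Conceptually, the corrected evaluation works like the sketch below. This is a simplified illustration in Rust, assuming a stripped-down Propagate rule type and an apply_rules function (both hypothetical, not the router's actual types):

use std::collections::{HashMap, HashSet};

struct Propagate {
    named: String,           // incoming header to read
    rename: Option<String>,  // optional name to write it under
    default: Option<String>, // optional fallback value
}

fn apply_rules(
    incoming: &HashMap<String, String>,
    rules: &[Propagate],
) -> HashMap<String, String> {
    let mut out = HashMap::new();
    let mut handled = HashSet::new();
    for rule in rules {
        let target = rule.rename.as_ref().unwrap_or(&rule.named).clone();
        if handled.contains(&target) {
            continue; // an earlier rule already populated this header
        }
        // Take the incoming value, or fall back to the default if configured.
        if let Some(value) = incoming
            .get(&rule.named)
            .cloned()
            .or_else(|| rule.default.clone())
        {
            out.insert(target.clone(), value);
            handled.insert(target); // marked as fixed only once populated
        }
        // Before #6690, the target was marked as handled at this point even
        // when nothing matched, silently disabling later rules for it.
    }
    out
}

fn main() {
    let incoming = HashMap::from([("b".to_string(), "from-b".to_string())]);
    let rules = [
        Propagate { named: "a".into(), rename: Some("b".into()), default: None },
        Propagate { named: "b".into(), rename: None, default: None },
    ];
    // With no incoming `a`, the second rule now fires: {"b": "from-b"}
    println!("{:?}", apply_rules(&incoming, &rules));
}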

The following will now work again:

headers:
  all:
    request:
      - propagate:
          named: a
          rename: b  # propagate incoming `a` to subgraphs as `b`
      - propagate:
          named: b   # if `a` was absent, propagate incoming `b` as-is

Note that defaulting a header WILL populate the header, so make sure to include your defaults last in your propagation
rules:

headers:
  all:
    request:
      - propagate:
          named: a
          rename: b
          default: defaulted # This will prevent any further rule evaluation for header `b`
      - propagate:
          named: b

Instead, make sure that your headers are defaulted last:

headers:
  all:
    request:
      - propagate:
          named: a
          rename: b
      - propagate:
          named: b
          default: defaulted # OK

By @BrynCooke in #6690

Entity cache: fix directive conflicts in cache-control header (Issue #6441)

Unnecessary cache-control directives were being added to the cache-control header. The router now filters out redundant values from the cache-control header when the request resolves. For example, if the merged header contains max-age=10, no-cache, must-revalidate, no-store, the resulting cache-control header is simply no-store. See the MDN docs for the reasoning behind this: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control#preventing_storing
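
The idea is sketched below in Rust. This is a minimal illustration, not the router's actual implementation, and the function name is hypothetical: because no-store forbids caching entirely, every other directive in the merged header is redundant and can be dropped.

fn filter_cache_control(header: &str) -> String {
    let directives: Vec<&str> = header.split(',').map(str::trim).collect();
    // `no-store` overrides all other caching behavior, so keep only it.
    if directives.iter().any(|d| *d == "no-store") {
        return "no-store".to_string();
    }
    header.to_string()
}

fn main() {
    // Prints "no-store": the other directives carry no extra meaning.
    println!("{}", filter_cache_control("max-age=10, no-cache, must-revalidate, no-store"));
}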

By @bnjjj in #6543

Resolve regressions in fragment compression for certain operations (PR #6651)

In v1.58.0 we introduced a new compression strategy for subgraph GraphQL operations to replace an older, more complicated algorithm.

While we were able to validate improvements for the majority of cases, some regressions still surfaced. To address this, we are extending the new strategy to compress more operations, with the following outcomes:

  • The P99 overhead of running the new compression algorithm on the largest operations in our corpus is now just 10ms
  • In cases where the new algorithm compresses better, at P99 it shrinks operations by 50 KB compared to the old algorithm
  • In cases where it compresses worse, at P99 it adds only an additional 108 bytes compared to the old algorithm, an acceptable trade-off given the old algorithm's added complexity

By @dariuszkuc in #6651
