confluentinc/librdkafka v2.2.0


librdkafka v2.2.0 is a feature release:

  • Fix a segmentation fault when subscribing to non-existent topics and
    using the consume batch functions (#4273).
  • Store offset commit metadata in rd_kafka_offsets_store
    (@mathispesch, #4084); a usage sketch follows this list.
  • Fix a bug that happens when skipping tags, causing buffer underflow in
    MetadataResponse (#4278).
  • Fix a bug where the topic leader was not refreshed in the same metadata
    call even when the leader was present.
  • KIP-881:
    Add support for rack-aware partition assignment for consumers
    (#4184, #4291, #4252).
  • Fix several bugs with sticky assignor in case of partition ownership
    changing between members of the consumer group (#4252).
  • KIP-368:
    Allow SASL Connections to Periodically Re-Authenticate
    (#4301, started by @vctoriawu).
  • Avoid treating an OpenSSL error as a permanent error and treat unclean SSL
    closes as normal ones (#4294).
  • Added fetch.queue.backoff.ms to the consumer to control how long the
    consumer backs off the next fetch attempt (@bitemyapp, @edenhill, #2879).
  • KIP-235:
    Add DNS alias support for secured connection (#4292).
  • KIP-339:
    IncrementalAlterConfigs API (started by @PrasanthV454, #4110).
  • KIP-554: Add Broker-side SCRAM Config API (#4241).
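
  For illustration, the offset-store metadata feature can be used along these
  lines. This is a minimal sketch rather than code from the release: the topic
  name, metadata string and error handling are placeholders, and it assumes the
  partition list takes ownership of the allocated metadata.

    /* Sketch: store an offset together with commit metadata so that the
     * next offset commit carries it (behaviour added by #4084). */
    #include <stdio.h>
    #include <string.h>
    #include <librdkafka/rdkafka.h>

    static void store_offset_with_metadata(rd_kafka_t *rk, const char *topic,
                                           int32_t partition, int64_t offset) {
            const char *meta = "processed-by=worker-1";     /* placeholder */
            rd_kafka_topic_partition_list_t *offsets =
                    rd_kafka_topic_partition_list_new(1);
            rd_kafka_topic_partition_t *rktpar =
                    rd_kafka_topic_partition_list_add(offsets, topic, partition);

            rktpar->offset        = offset;
            rktpar->metadata      = strdup(meta); /* assumed: freed with the list */
            rktpar->metadata_size = strlen(meta);

            rd_kafka_resp_err_t err = rd_kafka_offsets_store(rk, offsets);
            if (err)
                    fprintf(stderr, "offsets_store: %s\n", rd_kafka_err2str(err));

            rd_kafka_topic_partition_list_destroy(offsets);
    }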

Enhancements

  • Added fetch.queue.backoff.ms to the consumer to control how long
    the consumer backs off the next fetch attempt. When the pre-fetch queue
    has exceeded its queuing thresholds (queued.min.messages and
    queued.max.messages.kbytes) it backs off for 1 second.
    If those parameters have to be set too high to hold 1 s of data,
    this new parameter allows the fetch to back off earlier, reducing memory
    requirements.
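
  As a configuration sketch (illustrative values only, not taken from the
  release notes), the new property is set like any other consumer property:

    /* Sketch: configure the pre-fetch queue thresholds together with the
     * new fetch.queue.backoff.ms property (values are illustrative). */
    #include <stdio.h>
    #include <librdkafka/rdkafka.h>

    static rd_kafka_conf_t *make_consumer_conf(void) {
            char errstr[512];
            rd_kafka_conf_t *conf = rd_kafka_conf_new();

            /* Existing pre-fetch queue thresholds. */
            rd_kafka_conf_set(conf, "queued.min.messages", "100000",
                              errstr, sizeof(errstr));
            rd_kafka_conf_set(conf, "queued.max.messages.kbytes", "65536",
                              errstr, sizeof(errstr));

            /* New in v2.2.0: back off the next fetch for 100 ms instead of
             * the previous fixed 1 s once a threshold is exceeded. */
            if (rd_kafka_conf_set(conf, "fetch.queue.backoff.ms", "100",
                                  errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK)
                    fprintf(stderr, "%s\n", errstr);

            return conf;
    }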

Fixes

General fixes

  • Fix a bug that happens when skipping tags, causing a buffer underflow in
    MetadataResponse. It is triggered since RPC version 9 (v2.1.0),
    when using Confluent Platform, and only when racks are set,
    observers are activated and there is more than one partition.
    Fixed by skipping the correct amount of bytes when tags are received.
  • Avoid treating an OpenSSL error as a permanent error and treat unclean SSL
    closes as normal ones. When SSL connections are closed without close_notify,
    OpenSSL 3.x sets a new type of error that librdkafka interpreted as
    permanent. It can cause a different issue depending on the RPC:
    if received while waiting for an OffsetForLeaderEpoch response, it triggers
    an offset reset following the configured policy.
    Solved by treating SSL errors as transport errors and
    by setting an OpenSSL flag that allows treating unclean SSL closes as normal
    ones. These errors can happen if the other side doesn't support
    close_notify or if there's a TCP connection reset.
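
  For context, the OpenSSL 3.x behaviour involved can be pictured with the
  sketch below. This is illustrative only; that SSL_OP_IGNORE_UNEXPECTED_EOF
  is the exact flag the fix sets is an assumption, not something stated in the
  release notes.

    /* Sketch: with this option set, a peer that closes the connection
     * without sending close_notify is reported as a normal EOF instead of
     * a fatal SSL error (OpenSSL >= 3.0 only). Assumption: this is the
     * kind of flag the fix refers to. */
    #include <openssl/ssl.h>

    static void allow_unclean_close(SSL_CTX *ctx) {
    #ifdef SSL_OP_IGNORE_UNEXPECTED_EOF
            SSL_CTX_set_options(ctx, SSL_OP_IGNORE_UNEXPECTED_EOF);
    #endif
    }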

Consumer fixes

  • In case of multiple owners of a partition with different generations, the
    sticky assignor would pick the earliest (lowest generation) member as the
    current owner, which would lead to stickiness violations. Fixed by
    choosing the latest (highest generation) member.
  • The case where the same partition is owned by two members with the same
    generation indicates an issue. The sticky assignor had some code to
    handle this, but it was non-functional and did not have parity with the
    Java assignor. Fixed by invalidating any such partition from the current
    assignment completely.
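
  The rule applied by these two fixes can be pictured with a toy sketch; this
  is not librdkafka's actual assignor code, just an illustration of the
  ownership resolution described above.

    /* Toy sketch of the ownership rule: the member with the higher group
     * generation keeps the partition; equal generations are invalid and
     * the partition is dropped from the current assignment. */
    struct claim {
            const char *member_id;
            int generation;
    };

    static const struct claim *resolve_owner(const struct claim *a,
                                             const struct claim *b) {
            if (a->generation == b->generation)
                    return NULL;            /* invalid: drop the partition */
            return a->generation > b->generation ? a : b;
    }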

Checksums

Release asset checksums:

  • v2.2.0.zip SHA256 e9a99476dd326089ce986afd3a5b069ef8b93dbb845bc5157b3d94894de53567
  • v2.2.0.tar.gz SHA256 af9a820cbecbc64115629471df7c7cecd40403b6c34bfdbb9223152677a47226
