redpanda-data/redpanda v23.1.14

Bug Fixes

  • Avoids a crash when attempting to create a read replica topic while cloud storage is not configured. by @andrwng in #11808
  • Fix for OffsetForLeaderEpoch returning a value when the requestedEpoch is larger than the highest known epoch. by @graphcareful in #12086
  • Fix for OffsetForLeaderEpoch returning the current leader's epoch (term) instead of the requested one. by @graphcareful in #12086
  • Fixed a potential invalid memory access when iterating through segments with timestamps in the future. by @andrwng in #11969
  • Redpanda will now report upload housekeeping metrics. by @andrwng in #11955
  • #11542 Fixes a rare situation in which a consumer may get stuck due to an incorrect truncation point by @mmaslankaprv in #11543
  • #11604 net: Fix a rare crash during shutdown of a failed connection with outstanding requests by @BenPope in #11605
  • #11717 Memory consumption for housekeeping on compacted topics is reduced by @jcsp in #11718
  • #12014 Make tiered-storage metadata handling more strict during rolling upgrades by @Lazin in #12055
  • #12415 #12417 schema_registry: Strip redundant namespaces in Avro to improve schema lookup (see the sketch after this list) by @BenPope in #12425
  • rpk cluster logdirs no longer panics if there is an error getting a response from Redpanda by @twmb in #11926
  • rpk group offset-delete no longer tries to delete offsets for all topics if no topics are specified by @twmb in #11926
  • rpk group offset-delete no longer tries to delete offsets for empty-name topics by @twmb in #11926
  • fixes log segments being evicted too early by @mmaslankaprv in #12190
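
To illustrate the schema lookup scenario behind the Avro namespace fix above (#12425), here is a minimal sketch, assuming a local Redpanda Schema Registry listening on http://localhost:8081 and a hypothetical subject named demo-value. It registers a record whose nested type repeats the enclosing namespace, then looks up the same schema spelled without the redundant namespace; with the fix, both spellings should resolve to the same schema ID.

```python
import json

import requests

SR = "http://localhost:8081"   # assumed Schema Registry address
SUBJECT = "demo-value"         # hypothetical subject name

# Nested record that repeats the enclosing namespace (redundant in Avro,
# since a nested named type inherits the namespace of its enclosing type).
explicit_ns = {
    "type": "record",
    "name": "Outer",
    "namespace": "com.example",
    "fields": [{
        "name": "inner",
        "type": {
            "type": "record",
            "name": "Inner",
            "namespace": "com.example",   # redundant: inherited from Outer
            "fields": [{"name": "id", "type": "string"}],
        },
    }],
}

# The same schema with the redundant namespace removed.
implicit_ns = json.loads(json.dumps(explicit_ns))
del implicit_ns["fields"][0]["type"]["namespace"]

headers = {"Content-Type": "application/vnd.schemaregistry.v1+json"}

# Register the explicit spelling.
reg = requests.post(f"{SR}/subjects/{SUBJECT}/versions",
                    headers=headers, json={"schema": json.dumps(explicit_ns)})
reg.raise_for_status()

# Look up the implicit spelling; both should map to the same registered schema.
lookup = requests.post(f"{SR}/subjects/{SUBJECT}",
                       headers=headers, json={"schema": json.dumps(implicit_ns)})
lookup.raise_for_status()
print("registered id:", reg.json()["id"], "lookup id:", lookup.json()["id"])
```

The endpoints are the standard Confluent-compatible Schema Registry API that Redpanda exposes; the address, subject, and schema shape are assumptions for illustration.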

Improvements

  • Lower the TCP keepalive timeout to reap dead/idle connections faster and claim back resources by @StephanDollberg in #11774
  • Reduced latency impact from storing and retrieving metadata in certain scenarios where the number of partitions per shard is high by @BenPope in #11927
  • Reduced latency impact from storing metadata in certain scenarios where the number of partitions per shard is high by @BenPope in #11798
  • #11262 #11302 If kafka_max_bytes_per_fetch is not configured properly, redpanda is now more robust against huge Fetch requests by controlling memory consumption dynamically. by @dlex in #11858
  • #11262 #11302 Memory control is improved in the Fetch request handler, covering cases when a client tries to fetch too many partitions led by the same shard. Some of the requested partitions will not be fetched if the broker does not have enough memory for that, or if that would violate the constraints set by kafka_max_bytes_per_fetch and fetch_max_bytes (see the client-side sketch after this list). by @dlex in #11858
  • #11643 Improved efficiency in encoding tag values in the Kafka wire protocol by @michael-redpanda in #11645
  • #11853 Avoid large allocation when storing metadata in certain scenarios where the number of partitions per shard is high by @BenPope in #11854
  • #11924 admin_server/get_partition: Avoid oversized allocation by @BenPope in #11928
  • #12034 Disable use of fetch scheduling group by default. We found a few cases that were negatively affected. by @StephanDollberg in #12035
  • #12051 Redpanda will now gracefully handle badly formatted SCRAM authentication messages (see the consumer sketch after this list) by @michael-redpanda in #12052
  • #12106 Avoid large allocations with zstd compacted topics. by @BenPope in #12107
  • #12409 schema_registry: Improve the ordering of protobuf files by @BenPope in #12411
  • rpk: support SASL flags in rpk redpanda admin commands. by @r-vasquez in #11650
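
Tying together the SCRAM and Fetch memory-control items above (#12052, #11858), here is a minimal client-side sketch using kafka-python; the broker address, topic, group, credentials, and byte limits are assumptions for illustration. It authenticates with SASL/SCRAM and caps how much data one Fetch response may carry, which complements the broker-side kafka_max_bytes_per_fetch and fetch_max_bytes limits.

```python
from kafka import KafkaConsumer

# Hypothetical consumer against a Redpanda broker with SASL/SCRAM enabled.
consumer = KafkaConsumer(
    "demo-topic",                               # hypothetical topic
    bootstrap_servers="localhost:9092",         # assumed Kafka API listener
    group_id="demo-group",                      # hypothetical consumer group
    # SASL/SCRAM authentication; the broker now handles malformed client
    # messages gracefully (#12052).
    security_protocol="SASL_PLAINTEXT",
    sasl_mechanism="SCRAM-SHA-256",
    sasl_plain_username="demo-user",
    sasl_plain_password="demo-pass",
    # Client-side caps on Fetch sizes, alongside the broker-side limits.
    fetch_max_bytes=8 * 1024 * 1024,            # ~8 MiB per Fetch response
    max_partition_fetch_bytes=1 * 1024 * 1024,  # ~1 MiB per partition per Fetch
)

for record in consumer:
    print(record.topic, record.partition, record.offset)
```

Note that fetch_max_bytes is a soft limit in the Kafka protocol: the broker may still return a single batch larger than this value so the consumer can make progress.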

Full Changelog: v23.1.13...v23.1.14
