This release contains performance improvements and bug fixes since the 2.25.2 release. We recommend that you upgrade at the next available opportunity.
Highlighted features in TimescaleDB v2.26.0
- The vectorized aggregation engine now evaluates PostgreSQL functions directly on columnar arguments and stores the results in columnar format to preserve the high-speed execution pipeline. For analytical queries that use functions like `time_bucket()` in grouping or aggregation expressions, the function is evaluated natively without falling back to standard row-based processing. This enhancement ensures that the remainder of the query can seamlessly continue using the highly efficient columnar pipeline, yielding up to 3.5x faster performance.
- The query execution engine now supports composite bloom filters for `SELECT` and `UPSERT` operations, pushing multi-column predicates down directly to compressed table scans. This optimization bypasses costly batch decompression by automatically selecting the most restrictive bloom filter to quickly verify whether target values are present, making queries more than 2x faster when a composite bloom filter is used. Additionally, query profiling now includes detailed `EXPLAIN` statistics to monitor batch pruning and false-positive rates.
- The new custom node `ColumnarIndexScan` adjusts the query plan to fetch values from the sparse minmax indexes, improving query performance on the columnstore by up to 70x. For analytical queries that use functions like `COUNT`, `MIN`, `MAX`, `FIRST` (limited), and `LAST` (limited), the sparse index is read instead of decompressing the batches.
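As an illustration of the columnar pipeline, a grouping query like the following can now stay vectorized end to end. This is a minimal sketch: the `metrics` table and its columns are hypothetical, and the plan node names can be verified with `EXPLAIN` on your own workload.

```sql
-- Hypothetical hypertable; table and column names are illustrative only.
CREATE TABLE metrics (
    ts     timestamptz NOT NULL,
    device int,
    value  float8
);
SELECT create_hypertable('metrics', 'ts');

-- With v2.26.0, time_bucket() in the GROUP BY is evaluated natively on
-- columnar data, so the aggregation can run in the vectorized pipeline
-- instead of falling back to row-based processing.
SELECT time_bucket('1 hour', ts) AS bucket,
       device,
       min(value),
       max(value)
FROM metrics
GROUP BY bucket, device;

-- Inspect the plan: custom nodes such as VectorAgg or ColumnarIndexScan
-- indicate that the columnar path is being used.
EXPLAIN (ANALYZE, VERBOSE)
SELECT time_bucket('1 hour', ts), count(*)
FROM metrics
GROUP BY 1;
```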
Features
- #9104 Support `min(text)`, `max(text)` for C collation in the columnar aggregation pipeline
- #9117 Support functions like `time_bucket` in the columnar aggregation and grouping pipeline
- #9142 Remove column `dropped` from `_timescaledb_catalog.chunk`
- #9238 Support non-partial aggregates with vectorized aggregation
- #9253 Support `VectorAgg` in subqueries and CTEs
- #9266 Add support for `HAVING` to vectorized aggregation
- #9267 Enable `ColumnarIndexScan` custom scan
- #9312 Remove advisory locks from bgw jobs and add graceful cancellation
- #8983 Add GUC for default chunk time interval
- #9334 Fix out-of-range timestamp error in `WHERE` clauses
- #9368 Enable runtime chunk exclusion on inner side of nested loop join
- #9372 Push down composite bloom filter checks to `SELECT` execution
- #9374 Use bloom filters to eliminate decompression of unrelated compressed batches during `UPSERT` statements
- #9382 Fix chunk creation failure after replica identity invalidation
- #9398 Fix chunk exclusion for `IN`/`ANY` on open (time) dimensions
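For example, the composite bloom filter pushdown (#9372, #9374) targets multi-column equality predicates over compressed data. A sketch, with hypothetical table and column names:

```sql
-- Hypothetical compressed hypertable; names are illustrative only.
-- A multi-column equality predicate can now be checked against a
-- composite bloom filter before any batch is decompressed, so batches
-- that cannot contain the target values are skipped entirely.
SELECT *
FROM readings
WHERE tag_id = 42
  AND site = 'eu-west'
  AND ts > now() - interval '1 day';

-- Per the release notes, EXPLAIN output now includes statistics for
-- batch pruning and bloom filter false-positive rates.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM readings WHERE tag_id = 42 AND site = 'eu-west';
```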
Bugfixes
- #9401 Fix forced refresh not consuming invalidations
- #7629 Forbid non-constant timezone parameter in `time_bucket_gapfill`
- #9344 Fix wrong result or crash on cross-type comparison of partitioning column
- #9356 Fix potential crash when using a hypertable with partial compression or space partitioning in a nested loop join
- #9376 Allow `CREATE EXTENSION` after drop in the same session
- #9378 Fix foreign key constraint failure when inserting into a hypertable that references a foreign key
- #9381 Fix data loss with direct compress with client-ordered data in an `INSERT SELECT` from a compressed hypertable
- #9413 Fix incorrect decompress markers on full batch delete
- #9414 Fix `NULL` compression handling in `estimate_uncompressed_size`
- #9417 Fix segfault in `bloom1_contains`
GUCs
- `default_chunk_time_interval`: Default chunk time interval for new hypertables. This is an expert configuration; do not alter it unless recommended by Tiger Data.
- `enable_composite_bloom_indexes`: Enable creation of composite bloom indexes on compressed chunks. Default: `true`.
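A sketch of setting these GUCs. This assumes the usual `timescaledb.` GUC prefix; the interval value and the `tsdb` database name below are illustrative, not recommendations.

```sql
-- Session-level: enable composite bloom indexes (default is true).
SET timescaledb.enable_composite_bloom_indexes = true;

-- Expert setting: default chunk time interval for new hypertables.
-- Only change this if recommended by Tiger Data; '1 day' is an
-- arbitrary example value.
SET timescaledb.default_chunk_time_interval = '1 day';

-- Persist for a database rather than a single session:
ALTER DATABASE tsdb
    SET timescaledb.enable_composite_bloom_indexes = true;
```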
Thanks
- @bronzinni for reporting an issue with foreign keys on hypertables
- @janpio for reporting an issue with `CREATE EXTENSION` after dropping and recreating schema
- @leppaott for reporting a deadlock when deleting jobs