This release contains performance improvements and bug fixes since the 2.26.4 release. We recommend that you upgrade at the next available opportunity.
## Highlighted features in TimescaleDB v2.27.0
- The Hypercore engine now supports a vectorized implementation of filters by evaluating them inline through the standard Postgres function path. This expands the set of queries (including continuous aggregate refreshes) that can take the faster path through the columnstore, yielding speedups ranging from 30% up to 2x in benchmarks.
- `UPDATE` and `DELETE` statements with equality predicates can now use bloom filters to skip decompressing batches whose compressed rows cannot match. When multiple bloom filters apply, they are evaluated in decreasing order of column count (most selective first), and `EXPLAIN` now reports filtering activity via the new "Compressed batches filtered" and "Batches filtered after decompression" counters. Query performance improves by up to 160x in some cases.
- `UPSERT` queries can now leverage bloom filters (including composite ones) to skip decompressing batches when the arbiter values are guaranteed not to be present, with the most selective filter chosen automatically when multiple apply. `EXPLAIN` output adds new statistics for visibility into pruning effectiveness: batches checked by bloom, batches pruned by bloom, batches without bloom, and bloom false positives.
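As a sketch of how the new pruning surfaces in plans, the following uses a hypothetical compressed hypertable `metrics` with a bloom-filter sparse index on `device_id` (all names are illustrative, and the exact plan shape will vary):

```sql
-- Hypothetical schema: 'metrics' is a compressed hypertable with a
-- bloom-filter sparse index on the device_id column.
EXPLAIN (ANALYZE, COSTS OFF)
DELETE FROM metrics WHERE device_id = 'dev-42';

-- Batches whose bloom filter rules out 'dev-42' are skipped without
-- decompression; the plan then reports counters along the lines of:
--   Compressed batches filtered: 118
--   Batches filtered after decompression: 3
```

The counter names above are the ones introduced in this release; the surrounding plan output is only indicative.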
## Upcoming PostgreSQL 15 EOL announcement
As a reminder, the upcoming TimescaleDB release in June 2026 will officially be the last version with support for PostgreSQL 15. This deprecation was initially announced in the v2.23.0 changelog on October 29, 2025, to give users ample time to prepare. To ensure uninterrupted access to new features, bug fixes, and performance enhancements, all instances must be upgraded to PostgreSQL 16 or later.
## Backward-Incompatible Changes
- #9579 The bloom filter sparse indexes on compressed `int2` columns could lead to `SELECT` queries not returning rows that actually match the `WHERE` condition. The upgrade is blocked for affected databases, and the incorrect indexes must be dropped manually before the upgrade.
- This release introduces a new naming convention for composite bloom filter metadata. While this change does not disrupt query processing, v2.27 cannot automatically use composite bloom filters generated in v2.26. To convert your existing v2.26 composite bloom filters, the legacy metadata columns must be renamed. This is a lightweight, catalog-only operation requiring zero data recompression, which can be done with this migration script.
## Features
- #8868 Use `PG_MODULE_MAGIC_EXT` for PG18
- #8967 Rewrite queries with continuous aggregates exactly matching the query aggregation
- #9192 Push down scalar array operations into the columnar metadata scan by transforming them into an `OR`/`AND` clause
- #9355 Defer `segmentby` default for direct compress
- #9374 Use bloom filters to eliminate decompression of unrelated compressed batches during `UPSERT`s
- #9396 Analyze and get `segmentby` during direct compress
- #9398 Fix chunk exclusion for `IN`/`ANY` on open (time) dimensions
- #9399 Use bloom filters to reduce decompression during `UPDATE`/`DELETE` commands
- #9403 Set default `segmentby` during direct compress flush
- #9437 Allow running compression as part of the refresh policy for compressed continuous aggregates
- #9443 Enable vectorized aggregation in some cases when the `WHERE` clause contains filters not handled through the "Vectorized Filters" facility, e.g. filters on `time_bucket()`
- #9458 Remove `_timescaledb_functions.repair_relation_acls`
- #9475 Calculate hashes for bloom filter predicates at planning time
- #9504 Allow `ALTER TABLE RESET` on materialization hypertables
- #9521 Add support for reporting index creation progress
- #9559 Notice on compression settings change
- #9569 For nullable `orderby` columns, do segmentwise decompress-compress instead of segmentwise recompress
- #9583 Drop existing sparse indexes when dropping columns
- #9648 Support `ENABLE`/`DISABLE TRIGGER` on hypertables
- #9702 Allow Batch Sorted Merge for unordered chunks with no `segmentby` or when all `segmentby` columns are pinned to a `Const`
## Bugfixes
- #9363 Change compression job status when chunks could be compressed
- #9413 Fix incorrect decompress markers on full batch delete
- #9414 Fix `NULL` compression handling in `estimate_uncompressed_size`
- #9417 Fix segfault in `bloom1_contains`
- #9479 Disallow sub-day offset for `time_bucket` on `Date`
- #9482 Forbid Batch Sorted Merge on nullable `orderby` columns
- #9490 Disallow negative interval as `chunk_interval`
- #9500 Fix off-by-one error when building object name
- #9519 Remove self-referential `FOREIGN KEY` constraints from catalog
- #9561 Simplify job history retention by replacing binary search and temp table
- #9590 Fix policy skipping uncompressed chunks
- #9596 Remove unused `process_hypertable_invalidations` policy code
- #9604 Remove dead `post_parse_analyze_hook` capture in loader
- #9610 Fix use-after-free crash in `cache_destroy` during transaction abort
- #9632 Preserve chunk settings during recompress
- #9640 Fix `NULL` `datumCopy` crash in `segmentby` analysis
- #9680 Fix segfault in direct compress insert on hypertable with dropped column
- #9692 Fix internal "invalid perminfoindex 0 in RTE" error on `MERGE NOT MATCHED INSERT` into a hypertable
- #9705 Avoid double `TOAST` delete when `DELETE`-after-compression is enabled
- #9705 Only freeze compressed rows when truncating uncompressed chunk
- #9706 Use `bigint` in `estimate_uncompressed_size` calculations
- #9709 Reject mismatched element type in `bool`/`uuid` decompression
- #9710 Return `bigint` from `compressed_data_column_size`
- #9711 Fix registration row leak when continuous aggregate refresh fails
- #9697 Improve `pathkey` handling for compressed sub-paths during sort transformation
- #9743 Fix the composite bloom metadata column naming scheme
- #9767 Skip dropped chunks when trying to remove `ts_cagg_invalidation_trigger`
- #9747 Reject inheriting from a hypertable
- #9744 Use a fixed call string for the telemetry job in `ts_stat_statements` recording
- #9736 Do logical sparse index comparison
- #9731 Avoid creating overlapping batches during recompression for multi-`orderby` configurations
- #9717 Reject non-positive time bucket width on cagg creation
- #9707 Fix policy name comparison in `remove_policies`
## New Settings
- `enable_cagg_rewrites`: enables rewriting queries with CAggs. Off by default.
- `cagg_rewrites_debug_info`: prints CAgg rewrite diagnostics. Off by default.
- `enable_columnar_scan_filter_pushdown`: enables pushing filters on the columnar scan down to the compressed scan level. On by default.
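As a minimal sketch of toggling these settings per session, assuming they follow the extension's usual `timescaledb.` GUC prefix:

```sql
-- Opt in to the new continuous-aggregate query rewrites (off by default)
SET timescaledb.enable_cagg_rewrites = on;
-- Print diagnostics for CAgg rewrites while experimenting (off by default)
SET timescaledb.cagg_rewrites_debug_info = on;
-- Verify the filter pushdown setting, which is on by default
SHOW timescaledb.enable_columnar_scan_filter_pushdown;
```

Session-level `SET` is convenient for testing; the same settings can be made persistent with `ALTER SYSTEM` or per database with `ALTER DATABASE ... SET`.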
## Thanks
- @fabriziomello for adding support for `PG_MODULE_MAGIC_EXT`
- @maltalex for reporting an issue with index creation progress reporting
- @pavanmanishd for the first version of the fix for #9743
- @h0rn3t for reporting an issue with recompression creating overlapping batches