This is a recommended release due to significant chunk retrieval performance improvements. Chunk fetching from AR.IO peers now issues requests in parallel, reducing worst-case latency from ~150 seconds to ~4 seconds. Additional optimizations include tx_path Merkle proof validation to avoid expensive chain binary searches, offset-based chunk cache lookups via symlinks, and a static offset-to-block mapping that reduces block search iterations by ~29%.
Summary
- AR.IO Peer Chunk Retrieval Optimization: Parallel peer requests, worst-case latency reduced from ~150s to ~4s
- tx_path Chunk Validation: DB-first lookup with tx_path Merkle proof fallback
- Chunk Cache by Absolute Offset: Symlink-based O(1) cache lookups by weave offset
- Block Search Optimization: Static offset-to-block mapping reduces search iterations by ~29%
- Chunk POST Early Termination: Stop broadcasting after consecutive 4xx failures (~96% reduction in wasted requests)
- OTEL Nested Bundle Sampling Policies: Targeted tail-sampling to detect nested bundle offset issues
- Chunk Rebroadcasting: Optional async rebroadcasting of chunks from configured sources
See CHANGELOG.md for full details.
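The chunk POST early-termination behavior can be sketched as follows. The names and the exact threshold are assumptions for illustration, not the configured defaults: the point is that a run of consecutive 4xx responses for the same payload is likely to repeat on the remaining nodes, so stopping early avoids most of the wasted broadcasts:

```typescript
// Illustrative sketch (hypothetical names, not the AR.IO API): broadcast a
// chunk POST across nodes, but stop after several consecutive 4xx
// responses, since a client error for this payload will likely repeat.
async function broadcastChunk(
  nodes: string[],
  postChunk: (node: string) => Promise<number>, // returns an HTTP status code
  maxConsecutive4xx = 3, // assumed threshold for illustration
): Promise<{ accepted: number; attempted: number }> {
  let accepted = 0;
  let attempted = 0;
  let consecutive4xx = 0;
  for (const node of nodes) {
    attempted++;
    const status = await postChunk(node);
    if (status >= 200 && status < 300) {
      accepted++;
      consecutive4xx = 0;
    } else if (status >= 400 && status < 500) {
      consecutive4xx++;
      // Early termination: give up once several nodes in a row reject
      // the chunk with a client error.
      if (consecutive4xx >= maxConsecutive4xx) break;
    } else {
      // 5xx and other failures are treated as node-specific; keep going.
      consecutive4xx = 0;
    }
  }
  return { accepted, attempted };
}
```

With a threshold of 3, a chunk rejected by every node stops after 3 POSTs instead of visiting the full peer list, which is the source of the ~96% reduction in wasted requests cited above.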
Docker Images
ghcr.io/ar-io/ar-io-core:755f68c6bc8309ab775a5c9ffd81586f75cbab30
ghcr.io/ar-io/ar-io-envoy:4755fa0a2deb258bfaeaa91ba3154f1f7ef41fda
ghcr.io/ar-io/ar-io-clickhouse-auto-import:4512361f3d6bdc0d8a44dd83eb796fd88804a384
ghcr.io/ar-io/ar-io-litestream:be121fc0ae24a9eb7cdb2b92d01f047039b5f5e8