# ⚡ Performance Release
Two-layer caching architecture: in-memory packet store + TTL response cache with stale-while-revalidate. All packet reads served from RAM — SQLite is write-only. Heavy endpoints pre-warmed on startup.
## Benchmark Results (28K packets, ARM64)
| Endpoint | v2.0.1 | v2.1.0 | Speedup |
|---|---|---|---|
| Bulk Health | 7,059 ms | 1 ms | 7,059× |
| Node Analytics | 381 ms | 1 ms | 381× |
| Hash Sizes | 353 ms | 1 ms | 353× |
| Topology | 685 ms | 2 ms | 342× |
| RF Analytics | 253 ms | 1 ms | 253× |
| Channels | 206 ms | 1 ms | 206× |
| Node Health | 195 ms | 1 ms | 195× |
| Node Detail | 133 ms | 1 ms | 133× |
## What Changed

### In-Memory Packet Store (`packet-store.js`)
- All packets loaded into RAM on startup (~28K packets ≈ 12 MB)
- Map-indexed by id, hash, observer, and node for O(1) lookups (sketched below)
- Ring buffer with configurable max memory (default 1 GB)
- SQLite is now write-only for packets
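A minimal sketch of the Map-indexed, ring-buffer design, assuming packets carry `id` and `node` fields; the class and method names (`PacketStore`, `add`, `packetsForNode`) are illustrative, not the actual `packet-store.js` API:

```js
// Sketch only. The byHash/byObserver indexes are omitted for brevity.
class PacketStore {
  constructor(maxBytes = 1 << 30) {   // configurable memory cap, ~1 GiB default
    this.maxBytes = maxBytes;
    this.bytes = 0;
    this.packets = [];                // oldest-first; behaves as a ring buffer
    this.byId = new Map();            // id   -> packet
    this.byNode = new Map();          // node -> packet[]
  }

  add(pkt) {
    const size = Buffer.byteLength(JSON.stringify(pkt));
    // Evict oldest packets when the memory cap would be exceeded
    while (this.bytes + size > this.maxBytes && this.packets.length > 0) {
      const old = this.packets.shift();
      this.bytes -= old._size;
      this.byId.delete(old.id);
      const list = this.byNode.get(old.node);
      if (list) list.shift();         // per-node lists are oldest-first too
    }
    pkt._size = size;
    this.packets.push(pkt);
    this.bytes += size;
    this.byId.set(pkt.id, pkt);       // O(1) point lookup by id
    if (!this.byNode.has(pkt.node)) this.byNode.set(pkt.node, []);
    this.byNode.get(pkt.node).push(pkt);
  }

  packetsForNode(node) {
    return this.byNode.get(node) ?? []; // O(1); replaces the old LIKE scan
  }
}
```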
### TTL Cache with Stale-While-Revalidate
- All computed responses cached with configurable TTLs
- Smart invalidation: packet bursts invalidate only channels/observers; analytics expire by TTL
- Pre-warmed on startup: subpaths, RF, topology, channels, hash-sizes, bulk-health
- Stale-while-revalidate: expired entries are served instantly while a single recompute runs in the background, so there are no cache stampedes (see the sketch below)
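A minimal sketch of the stale-while-revalidate path, assuming `compute` is an async function that rebuilds one response; `getOrCompute` and the `refreshing` flag are illustrative names, not the actual implementation:

```js
const cache = new Map(); // key -> { value, expiresAt, refreshing }

async function getOrCompute(key, ttlMs, compute) {
  const now = Date.now();
  const entry = cache.get(key);

  if (entry) {
    if (now < entry.expiresAt) return entry.value;   // fresh hit
    if (!entry.refreshing) {
      // Stale: start exactly one background recompute (no stampede)
      entry.refreshing = compute()
        .then((value) => cache.set(key, { value, expiresAt: Date.now() + ttlMs }))
        .catch(() => {})                             // keep serving stale on failure
        .finally(() => { entry.refreshing = null; });
    }
    return entry.value;                              // serve stale instantly
  }

  const value = await compute();                     // cold miss blocks once
  cache.set(key, { value, expiresAt: now + ttlMs });
  return value;
}
```

Only the very first request for a key ever waits on a recompute; every later request gets either a fresh or a stale value in O(1).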
### Telemetry
- `/api/health` endpoint: process memory, event loop lag (p50/p95/p99/max), cache hit rate + SWR stats, WebSocket client count, packet store size (lag collection sketched below)
- Perf dashboard (`#/perf`) enhanced with system health cards and color-coded thresholds
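Event loop lag percentiles like these can be collected with Node's built-in `perf_hooks` interval histogram; this is a sketch of one way to do it, not necessarily how the endpoint is implemented:

```js
const { monitorEventLoopDelay } = require('perf_hooks');

const loopDelay = monitorEventLoopDelay({ resolution: 10 }); // sample every 10 ms
loopDelay.enable();

// Illustrative payload builder for a health endpoint; field names are examples
function healthSnapshot() {
  const ms = (ns) => ns / 1e6; // the histogram reports nanoseconds
  return {
    memory: process.memoryUsage(),
    eventLoopLagMs: {
      p50: ms(loopDelay.percentile(50)),
      p95: ms(loopDelay.percentile(95)),
      p99: ms(loopDelay.percentile(99)),
      max: ms(loopDelay.max),
    },
  };
}
```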
### Eliminated All `LIKE` Scans
- Every node endpoint was doing `decoded_json LIKE '%pubkey%'` full-table scans
- Replaced with O(1) `pktStore.byNode` Map lookups (before/after sketched below)
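For contrast: the old path issued a per-request `SELECT ... WHERE decoded_json LIKE '%<pubkey>%'` scan, while the new path is a plain Map read. A sketch against the hypothetical `PacketStore` above:

```js
// pktStore: an instance of the PacketStore sketched earlier
function packetsByPubkey(pktStore, pubkey) {
  return pktStore.byNode.get(pubkey) ?? []; // O(1) instead of a full-table scan
}
```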
### Other
- RF response compression: 1 MB → 15 KB via server-side histograms and downsampling (see the sketch after this list)
- Client-side WebSocket prepend (no API re-fetch on new packets)
- All TTLs configurable via `config.json`
- A/B benchmark script (`benchmark-ab.sh`)
- Favicon added 🔺
- Bug fix: added a null guard on `animatePacket` on the live page
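A minimal sketch of the server-side histogram idea behind the RF payload shrink, assuming raw numeric samples (e.g. RSSI values); the function name and bin policy are illustrative:

```js
// Bucket raw samples into fixed-width bins so the response carries
// bin counts instead of every raw data point.
function histogram(values, binCount, min, max) {
  const bins = new Array(binCount).fill(0);
  const width = (max - min) / binCount || 1; // guard against a zero-width range
  for (const v of values) {
    const i = Math.min(binCount - 1, Math.max(0, Math.floor((v - min) / width)));
    bins[i] += 1;
  }
  return { min, max, width, bins }; // a few hundred bytes regardless of input size
}
```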
## What Didn't Work (and why)
- `setInterval` background refresh: blocked the Node.js event loop; response times went from 3 ms to 1,200 ms. Reverted (illustrated below).
- Worker threads: `structuredClone` overhead (416 ms for 28K packets) negated the compute savings.
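The `setInterval` failure mode, simulated with a hypothetical `recomputeAllAnalytics` stand-in: Node.js runs timer callbacks on the same single-threaded event loop as request handlers, so synchronous CPU work in the callback stalls every in-flight request:

```js
// Reverted approach, simulated: a periodic eager recompute on a timer.
function recomputeAllAnalytics() {
  const end = Date.now() + 1200;  // simulate ~1.2 s of synchronous CPU work
  while (Date.now() < end) {}     // the event loop is blocked for this long
}

setInterval(recomputeAllAnalytics, 30_000);
// Any request arriving during the busy-loop waits the full 1,200 ms,
// matching the observed 3 ms -> 1,200 ms regression.
```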
See `PERFORMANCE.md` for full details.