v8.48.0 - CSV-Based Metadata Compression


CSV-Based Metadata Compression for Cloudflare Sync

This release implements an intelligent metadata compression system that resolves Cloudflare D1 metadata size limit issues while maintaining full backward compatibility.

Key Features

📦 Intelligent CSV-Based Compression

  • 78% size reduction: Typical metadata compressed from 732B to 159B
  • CSV encoding/decoding for quality and consolidation metadata (a minimal sketch follows this list)
  • Provider code mapping: Reduces provider names by 70% (onnx_local → ox, groq_llama3_70b → gp)
  • Transparent operation: Automatic compression on write, decompression on read
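
As a rough picture of how such a codec can work, here is a minimal sketch. `encode_quality_csv`, `decode_quality_csv`, and the `PROVIDER_CODES` table are hypothetical names rather than the actual `metadata_codec.py` API; only the `onnx_local → ox` and `groq_llama3_70b → gp` mappings come from this release.

```python
# Minimal sketch of CSV-based metadata compression with provider code mapping.
# Function and table names are illustrative, not the real metadata_codec.py API.
import csv
import io

# Short codes stand in for verbose provider names (mappings from the release notes).
PROVIDER_CODES = {"onnx_local": "ox", "groq_llama3_70b": "gp"}
PROVIDER_NAMES = {v: k for k, v in PROVIDER_CODES.items()}

def encode_quality_csv(scores: list[dict]) -> str:
    """Pack a list of {provider, score, ts} dicts into one compact CSV string."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for entry in scores:
        writer.writerow([
            PROVIDER_CODES.get(entry["provider"], entry["provider"]),
            f"{entry['score']:.3f}",
            entry["ts"],
        ])
    return buf.getvalue().strip()

def decode_quality_csv(blob: str) -> list[dict]:
    """Inverse of encode_quality_csv: expand short codes back to full names."""
    return [
        {"provider": PROVIDER_NAMES.get(code, code), "score": float(score), "ts": ts}
        for code, score, ts in csv.reader(io.StringIO(blob))
    ]
```

Compared with the equivalent JSON structure, the CSV form drops keys, quotes, and braces entirely, which is where most of the reported ~78% saving would come from.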

✅ 100% Sync Success Rate

  • Resolved all Cloudflare sync failures (1 stuck operation → 0 failures)
  • Pre-sync validation: Metadata size checks (<9.5KB) prevent 400 Bad Request errors (see the size-guard sketch after this list)
  • Cloudflare D1 10KB limit no longer a constraint for quality/consolidation metadata
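
The validation step can be pictured as a simple size guard run before a record is enqueued for sync. The constant and function name below are assumptions for illustration, not the actual hybrid.py code.

```python
# Hedged sketch of a pre-sync size guard; names are illustrative.
import json

MAX_METADATA_BYTES = 9_500  # stay under Cloudflare D1's 10KB metadata limit

def validate_metadata_size(metadata: dict) -> None:
    """Fail fast locally instead of letting the D1 API return 400 Bad Request."""
    size = len(json.dumps(metadata, separators=(",", ":")).encode("utf-8"))
    if size > MAX_METADATA_BYTES:
        raise ValueError(
            f"metadata is {size}B, exceeding the {MAX_METADATA_BYTES}B pre-sync cap"
        )
```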

🎯 Smart Metadata Optimizations

  • ai_scores history: Limited to the 3 most recent entries (reduced from 10)
  • quality_components: Removed from sync (debug-only, reconstructible locally)
  • Cloudflare-specific suppression: metadata_source and last_quality_check fields excluded (these rules are combined in the sketch after this list)
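
Taken together, these rules amount to a slimming pass over the metadata dict before it is synced. The sketch below mirrors the field names from the bullets above, but the function itself is hypothetical.

```python
# Illustrative slimming pass; the field names come from the release notes,
# the function and set names are invented for this sketch.
DEBUG_ONLY_FIELDS = {"quality_components", "metadata_source", "last_quality_check"}

def slim_for_cloudflare(metadata: dict) -> dict:
    """Drop debug-only fields and keep only the 3 newest ai_scores entries."""
    slim = {k: v for k, v in metadata.items() if k not in DEBUG_ONLY_FIELDS}
    if isinstance(slim.get("ai_scores"), list):
        slim["ai_scores"] = slim["ai_scores"][-3:]  # 3 most recent entries
    return slim
```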

Technical Details

Architecture: Phase 1 of a 3-phase metadata optimization plan

  • ✅ Phase 1 (COMPLETE): CSV-based compression for quality/consolidation metadata
  • 📋 Phase 2 (AVAILABLE): Binary encoding with struct/msgpack (85-90% reduction target; a rough sketch follows this list)
  • 📋 Phase 3 (AVAILABLE): Reference-based deduplication for repeated values
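
Phase 2 has not shipped in this release, but the idea can be pictured as fixed-width binary rows in place of CSV text. This struct-based fragment is purely illustrative of that direction, not a preview of the actual implementation.

```python
# Rough illustration of binary encoding (the Phase 2 idea): each
# (provider_id, score) row packs into 5 bytes instead of a CSV line.
import struct

def pack_scores(rows: list[tuple[int, float]]) -> bytes:
    """uint8 provider id + float32 score = 5 bytes per row."""
    return b"".join(struct.pack("<Bf", pid, score) for pid, score in rows)

def unpack_scores(blob: bytes) -> list[tuple[int, float]]:
    """Reverse of pack_scores, reading fixed 5-byte rows."""
    return [struct.unpack_from("<Bf", blob, i) for i in range(0, len(blob), 5)]
```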

Files Changed

  • NEW: src/mcp_memory_service/quality/metadata_codec.py - CSV encoding/decoding functions
  • MODIFIED: src/mcp_memory_service/storage/hybrid.py - Metadata validation and compression integration
  • MODIFIED: src/mcp_memory_service/storage/cloudflare.py - Decompression on retrieval
  • NEW: verify_compression.sh - Verification script for compression testing

Performance Impact

  • Compression overhead: <1ms per operation (negligible)
  • Backward compatibility: Fully transparent to all operations
  • Testing: All quality system tests passing, sync queue empty, 3,750 ONNX-scored memories verified

What's Fixed

  • Cloudflare sync failures due to metadata size exceeding 10KB limit
  • 400 Bad Request errors from D1 API when quality metadata was too large
  • Stuck operations in retry queue (operations_failed: 1 → 0)

Upgrade Notes

No configuration changes are required; compression is automatic and transparent. The system seamlessly handles both compressed and uncompressed metadata, so existing records stay readable after the upgrade.
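
A minimal sketch of what that read-path fallback implies, assuming compressed values carry a recognizable marker; the `csv1:` prefix and `read_quality` function are invented here for illustration and are not necessarily the real storage format.

```python
# Hypothetical read-path fallback: the "csv1:" marker is an assumption,
# not the documented on-disk format.
import csv
import io

def read_quality(value):
    """Accept both legacy dict metadata and compressed CSV strings."""
    if isinstance(value, str) and value.startswith("csv1:"):
        rows = csv.reader(io.StringIO(value[len("csv1:"):]))
        return [
            {"provider": p, "score": float(s), "ts": ts}
            for p, s, ts in rows
        ]
    return value  # uncompressed legacy metadata passes through unchanged
```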


Full Changelog: https://github.com/doobidoo/mcp-memory-service/blob/main/CHANGELOG.md#8480---2025-12-07
