ipfs/kubo v0.39.0

Note

This release was brought to you by the Shipyard team.

  • Overview
  • 🔦 Highlights
    • 🎯 DHT Sweep provider is now the default
    • ⚡ Fast root CID providing for immediate content discovery
    • ⏯️ Provider state persists across restarts
    • 📊 Detailed statistics with ipfs provide stat
    • 🔔 Slow reprovide warnings
    • 📊 Metric rename: provider_provides_total
    • 🔧 Automatic UPnP recovery after router restarts
    • 🪦 Deprecated go-ipfs name no longer published
    • 🚦 Gateway range request limits for CDN compatibility
    • 🖥️ RISC-V support with prebuilt binaries
  • 📦️ Important dependency updates
  • 📝 Changelog
  • 👨‍👩‍👧‍👦 Contributors

Overview

This release is an important step toward solving the DHT bottleneck for self-hosting IPFS on consumer hardware and home networks. The DHT sweep provider (now default) announces your content to the network without traffic spikes that overwhelm residential connections. Automatic UPnP recovery means your node stays reachable after router restarts without manual intervention.

New content becomes findable immediately after ipfs add. The provider system persists state across restarts, alerts you when falling behind, and exposes detailed stats for monitoring. This release also finalizes the deprecation of the legacy go-ipfs name.

🔦 Highlights

🎯 DHT Sweep provider is now the default

The Amino DHT Sweep provider system, introduced as experimental in v0.38, is now enabled by default (Provide.DHT.SweepEnabled=true).

What this means: All nodes now benefit from efficient keyspace-sweeping content announcements that reduce memory overhead and create predictable network patterns, especially for nodes providing large content collections.

Migration: The transition is automatic on upgrade. Your existing configuration is preserved:

  • If you explicitly set Provide.DHT.SweepEnabled=false in v0.38, you'll continue using the legacy provider
  • If you were using the default settings, you'll automatically get the sweep provider
  • To opt out and return to legacy behavior: ipfs config --json Provide.DHT.SweepEnabled false (see the example after this list)
  • Providers with medium to large datasets may need to adjust defaults; see Capacity Planning
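
As an example, checking the current value and opting out with the documented config key (a sketch; config changes take effect after restarting the daemon):

ipfs config Provide.DHT.SweepEnabled              # Inspect the current value
ipfs config --json Provide.DHT.SweepEnabled false # Opt out: use the legacy provider
ipfs config --json Provide.DHT.SweepEnabled true  # Opt back in to sweep mode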

New features available with sweep mode:

  • Detailed statistics via ipfs provide stat (see below)
  • Automatic resume after restarts with persistent state (see below)
  • Proactive alerts when reproviding falls behind (see below)
  • Better metrics for monitoring (provider_provides_total) (see below)
  • Fast optimistic provide of new root CIDs (see below)

For background on the sweep provider design and motivations, see Provide.DHT.SweepEnabled and Shipyard's blog post Provide Sweep: Solving the DHT Provide Bottleneck.

⚡ Fast root CID providing for immediate content discovery

When you add content to IPFS, the sweep provider queues it for efficient DHT provides over time. While this is resource-efficient, other peers won't find your content immediately after ipfs add or ipfs dag import completes.

To make sharing faster, ipfs add and ipfs dag import now do an immediate provide of root CIDs to the DHT in addition to the regular queue (controlled by the new --fast-provide-root flag, enabled by default). This complements the sweep provider system: fast-provide handles the urgent case (root CIDs that users share and reference), while the sweep provider efficiently provides all blocks according to Provide.Strategy over time.

This closes the gap between command completion and content shareability: root CIDs typically become discoverable on the network in under a second (compared to 30+ seconds previously). The feature uses optimistic DHT operations, which are significantly faster with the sweep provider (now enabled by default).

By default, this immediate provide runs in the background without blocking the command. For use cases requiring guaranteed discoverability before the command returns (e.g., sharing a link immediately), use --fast-provide-wait to block until the provide completes.

Simple examples:

ipfs add file.txt                     # Root provided immediately, blocks queued for sweep provider
ipfs add file.txt --fast-provide-wait # Wait for root provide to complete
ipfs dag import file.car              # Same for CAR imports

Configuration: Set defaults via Import.FastProvideRoot (default: true) and Import.FastProvideWait (default: false). See ipfs add --help and ipfs dag import --help for more details and examples.
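
For instance, to change these defaults using the documented keys (a sketch; applies after a daemon restart):

ipfs config --json Import.FastProvideWait true   # Always block until the root provide completes
ipfs config --json Import.FastProvideRoot false  # Disable immediate root providing entirely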

This optimization works best with the sweep provider and accelerated DHT client, where provide operations are significantly faster. It is automatically skipped when the DHT is unavailable (e.g., Routing.Type=none or delegated-only configurations).

โฏ๏ธ Provider state persists across restarts

The Sweep provider now persists the reprovide cycle state and automatically resumes where it left off after a restart. This brings several improvements:

  • Persistent progress: The provider saves its position in the reprovide cycle to the datastore. On restart, it continues from where it stopped instead of starting from scratch.
  • Catch-up reproviding: If the node was offline for an extended period, all CIDs that haven't been reprovided within the configured reprovide interval are immediately queued for reproviding when the node starts up. This ensures content availability is maintained even after downtime.
  • Persistent provide queue: The provide queue is persisted to the datastore on shutdown. When the node restarts, queued CIDs are restored and provided as expected, preventing loss of pending provide operations.
  • Resume control: The resume behavior is controlled via Provide.DHT.ResumeEnabled (default: true). Set to false if you don't want to keep the persisted provider state from a previous run.

This feature improves reliability for nodes that experience intermittent connectivity or restarts.
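
If you prefer a fresh cycle on every start, the Provide.DHT.ResumeEnabled switch described above can be flipped (a sketch; takes effect on the next daemon start):

ipfs config --json Provide.DHT.ResumeEnabled false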

📊 Detailed statistics with ipfs provide stat

The Sweep provider system now exposes detailed statistics through ipfs provide stat, helping you monitor provider health and troubleshoot issues.

Run ipfs provide stat for a quick summary, or use --all to see complete metrics including connectivity status, queue sizes, reprovide schedules, network statistics, operation rates, and worker utilization. For real-time monitoring, use watch ipfs provide stat --all --compact to observe changes in a 2-column layout. Individual sections can be displayed with flags like --network, --operations, or --workers.

For Dual DHT configurations, use --lan to view LAN DHT statistics instead of the default WAN DHT stats.

For more information, run ipfs provide stat --help or see the Provide Stats documentation, including Capacity Planning.
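
The invocations mentioned above, gathered in one place:

ipfs provide stat                        # Quick summary
ipfs provide stat --all                  # Complete metrics
ipfs provide stat --network --workers    # Selected sections only
ipfs provide stat --all --lan            # LAN DHT stats (Dual DHT configurations)
watch ipfs provide stat --all --compact  # Real-time monitoring in a 2-column layout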

Note

The legacy provider (when Provide.DHT.SweepEnabled=false) shows only basic statistics and does not support these flags.

🔔 Slow reprovide warnings

Kubo now monitors DHT reprovide operations when Provide.DHT.SweepEnabled=true
and alerts you if your node is falling behind on reprovides.

When the reprovide queue consistently grows and all periodic workers are busy,
a warning is displayed that includes:

  • Queue size and worker utilization details
  • Recommended solutions: increase Provide.DHT.MaxWorkers or Provide.DHT.DedicatedPeriodicWorkers
  • Command to monitor real-time progress: watch ipfs provide stat --all --compact

The alert polls every 15 minutes (to avoid alert fatigue while catching
persistent issues) and only triggers after sustained growth across multiple
intervals. The legacy provider is unaffected by this change.
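
If the warning persists, the remedies listed above translate to commands like the following (the worker counts are illustrative values, not recommendations):

ipfs config --json Provide.DHT.MaxWorkers 32              # Example value; tune for your node
ipfs config --json Provide.DHT.DedicatedPeriodicWorkers 8 # Example value; tune for your node
watch ipfs provide stat --all --compact                   # Then watch the queue drain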

📊 Metric rename: provider_provides_total

The Amino DHT Sweep provider metric has been renamed from total_provide_count_total to provider_provides_total to follow OpenTelemetry naming conventions and maintain consistency with other kad-dht metrics (which use dot notation like rpc.inbound.messages, rpc.outbound.requests, etc.).

Migration: If you have Prometheus queries, dashboards, or alerts monitoring the old total_provide_count_total metric, update them to use provider_provides_total instead. This affects all nodes using sweep mode, which is now the default in v0.39 (previously opt-in experimental in v0.38).
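
To verify the new metric name on a running node, assuming the default RPC API address (127.0.0.1:5001) where Kubo exposes its Prometheus endpoint:

curl -s http://127.0.0.1:5001/debug/metrics/prometheus | grep provider_provides_total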

🔧 Automatic UPnP recovery after router restarts

Kubo now automatically recovers UPnP port mappings when routers restart or
become temporarily unavailable, fixing a critical connectivity issue that
affected self-hosted nodes behind NAT.

Previous behavior: When a UPnP-enabled router restarted, Kubo would lose
its port mapping and fail to re-establish it automatically. Nodes would become
unreachable to the network until the daemon was manually restarted, forcing
reliance on relay connections which degraded performance.

New behavior: The upgraded go-libp2p (v0.44.0) includes Shipyard's fix
for self-healing NAT mappings that automatically rediscover and re-establish
port forwarding after router events. Nodes now maintain public connectivity
without manual intervention.

Note

If your node runs behind a router and you haven't manually configured port
forwarding, make sure Swarm.DisableNatPortMap=false
so UPnP can automatically handle port mapping (this is the default).
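
To confirm the setting on an existing node (false is the default, so no action is needed unless it was changed):

ipfs config Swarm.DisableNatPortMap              # Should print: false
ipfs config --json Swarm.DisableNatPortMap false # Re-enable UPnP if it was disabled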

This significantly improves reliability for desktop and self-hosted IPFS nodes
using UPnP for NAT traversal.

🪦 Deprecated go-ipfs name no longer published

The go-ipfs name was deprecated in 2022 and renamed to kubo. Starting with this release, the legacy Docker image name has been replaced with a stub that displays an error message directing users to switch to ipfs/kubo.

Docker images: The ipfs/go-ipfs image tags now contain only a stub script that exits with an error, instructing users to update their Docker configurations to use ipfs/kubo instead. This ensures users are aware of the deprecation while allowing existing automation to fail explicitly rather than silently using outdated images.

Distribution binaries: Download Kubo from https://dist.ipfs.tech/kubo/ or https://github.com/ipfs/kubo/releases. The legacy go-ipfs distribution path should no longer be used.

All users should migrate to the kubo name in their scripts and configurations.
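
A migration sketch for Docker users (the latest tag is illustrative; pin whichever version you need):

grep -r 'ipfs/go-ipfs' .      # Find stale references in scripts and compose files
docker pull ipfs/kubo:latest  # Pull the maintained image instead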

🚦 Gateway range request limits for CDN compatibility

The new Gateway.MaxRangeRequestFileSize configuration protects against CDN range request limitations that cause bandwidth overcharges on deserialized responses. Some CDNs convert range requests over large files into full file downloads, causing clients requesting small byte ranges to unknowingly download entire multi-gigabyte files.

This only impacts deserialized responses. Clients using verifiable block requests (application/vnd.ipld.raw) are not affected. See the configuration documentation for details.
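
For illustration, the two request styles against a local gateway on the default port 8080 (<cid> is a placeholder):

curl -H "Range: bytes=0-1023" http://127.0.0.1:8080/ipfs/<cid>               # Deserialized: subject to the limit
curl -H "Accept: application/vnd.ipld.raw" http://127.0.0.1:8080/ipfs/<cid>  # Verifiable block: not affected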

๐Ÿ–ฅ๏ธ RISC-V support with prebuilt binaries

Kubo now provides official linux-riscv64 prebuilt binaries, bringing IPFS to RISC-V open hardware.

As RISC-V single-board computers and embedded systems become more accessible, the distributed web now runs on open hardware architectures, a natural pairing of open technologies.

Download from https://dist.ipfs.tech/kubo/ or https://github.com/ipfs/kubo/releases and look for the linux-riscv64 archive.
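
A download sketch following the usual dist.ipfs.tech naming scheme (verify the exact archive name on the release page before scripting against it):

wget https://dist.ipfs.tech/kubo/v0.39.0/kubo_v0.39.0_linux-riscv64.tar.gz
tar -xzf kubo_v0.39.0_linux-riscv64.tar.gz
sudo ./kubo/install.sh   # Standard dist archive layout includes an install script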

๐Ÿ“ฆ๏ธ Important dependency updates

  • update go-libp2p to v0.45.0 (incl. v0.44.0) with self-healing UPnP port mappings and go-log/slog interop fixes
  • update quic-go to v0.55.0
  • update go-log to v2.9.0 with slog integration for go-libp2p
  • update go-ds-pebble to v0.5.7 (includes pebble v2.1.2)
  • update boxo to v0.35.2 (includes boxo v0.35.1)
  • update ipfs-webui to v4.10.0
  • update go-libp2p-kad-dht to v0.36.0

๐Ÿ“ Changelog

Full Changelog

👨‍👩‍👧‍👦 Contributors

Contributor         Commits  Lines ±       Files Changed
@guillaumemichel    41       +9906/-1383   170
@lidel              30       +6652/-694    97
@sukunrt            9        +1618/-1524   39
@MarcoPolo          17       +1665/-1452   160
@gammazero          23       +514/-53      29
@Prabhat1308        1        +197/-67      4
@peterargue         3        +82/-25       5
@cargoedit          1        +35/-72       14
@hsanjuan           2        +66/-29       5
@shoriwe            1        +68/-21       3
@dennis-tra         2        +27/-2        2
@Lil-Duckling-22    1        +4/-1         1
@crStiv             1        +1/-3         1
@cpeliciari         1        +3/-0         1
@rvagg              1        +1/-1         1
@p-shahi            1        +1/-1         1
@lbarrettanderson   1        +1/-1         1
@filipremb          1        +1/-1         1
@marten-seemann     1        +0/-1         1
