[0.21.0] - 2025-08-17
Context
Previously, kftray checked for pod and service changes every few seconds, which was pretty slow and ate up more resources than needed. State management would sometimes get out of sync after crashes, and the app would try to reconnect way too often during network drops.
Main Changes
The port forwarding now uses Kubernetes watchers and reflectors to keep tabs on pods instead of constantly asking "what's changed?". This is basically the same pattern Kubernetes operators use - watching for stuff to happen instead of checking over and over.
What's different:
- Pod crashes, restarts, and IP changes get picked up right away
- Way fewer API calls hitting your cluster
- Better connection handling when things go sideways
How It Works
The new setup keeps a local copy of pod states using Kubernetes watchers, just like operators do with custom resources. When you start a port forward, kftray grabs pod info from this local cache instead of hitting the API.
The implementation uses kube-rs's watcher API, which opens a long-lived connection to the API server. This connection receives a stream of events (Added, Modified, Deleted) for pods matching our label selectors. The reflector maintains an in-memory store that stays synchronized with the cluster state through these events.
The reflector handles:
- List/Watch pattern: Initial list to populate the cache, then watch for incremental updates
- Automatic reconnection: If the watch connection drops, it resumes from the last known resource version
- Event deduplication: Filters out duplicate events and handles out-of-order updates
This means kftray always has an accurate view of pod states without making repeated API calls, and can react to changes in milliseconds instead of waiting for the next polling cycle.
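To make that concrete, here's a minimal sketch of the watcher-plus-reflector pattern using kube-rs (with the "runtime" feature), k8s-openapi, tokio, futures, and anyhow. This isn't kftray's actual code - the namespace and the app=my-service label selector are made up - but it shows the list/watch flow and the local cache described above:

```rust
use futures::{StreamExt, TryStreamExt};
use k8s_openapi::api::core::v1::Pod;
use kube::{
    api::Api,
    runtime::{reflector, watcher, WatchStreamExt},
    Client,
};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let client = Client::try_default().await?;

    // Assumed namespace and label selector, purely for illustration.
    let pods: Api<Pod> = Api::namespaced(client, "default");
    let config = watcher::Config::default().labels("app=my-service");

    // The reflector writes watch events into an in-memory store (writer);
    // the reader handle is cheap to clone and can be queried from anywhere.
    let (reader, writer) = reflector::store::<Pod>();
    let mut events = reflector(writer, watcher(pods, config))
        .applied_objects()
        .boxed();

    // Drive the watch in the background: an initial list populates the
    // cache, then incremental Added/Modified/Deleted events keep it in sync.
    tokio::spawn(async move {
        while let Ok(Some(pod)) = events.try_next().await {
            println!("pod event: {:?}", pod.metadata.name);
        }
    });

    // A port forward can now read pod state from the local cache
    // instead of calling the API server each time.
    reader.wait_until_ready().await?;
    for pod in reader.state() {
        println!("cached pod: {:?}", pod.metadata.name);
    }

    Ok(())
}
```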
What this means:
- Port forwards react to pod changes instantly
- Less load on your Kubernetes cluster
- Connections handle network issues without freaking out
Yeah, using the operator pattern for port forwarding is probably overkill, but it actually works pretty well.
Other Changes
Made a bunch of stuff faster and more stable:
- Prewarmed Connections: Connections are established ahead of time and ready to handle traffic immediately, making port forwards more stable and responsive
- Network Recovery: The network monitor now waits a bit for things to settle down before trying to reconnect, instead of hammering away during network blips
- State Management: Port forwards keep track of their process ID to clean up after themselves if kftray dies unexpectedly, so you don't end up with dead connections showing in the UI
- Client Caching: Reuses Kubernetes client connections instead of making new ones all the time
- TCP Tuning: Tweaked socket settings for better throughput (TCP_NODELAY, bigger buffers, etc.) - see the sketch just after this list
- Parallel Health Checks: Status checks now run at the same time instead of one by one
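For the TCP tuning item above, here's a minimal sketch using std and the socket2 crate. The option set and buffer sizes are illustrative guesses, not kftray's actual values:

```rust
use socket2::Socket;
use std::net::TcpStream;

// Hypothetical helper showing the kind of socket tuning described above.
fn tune_socket(stream: &TcpStream) -> std::io::Result<()> {
    // TCP_NODELAY: send small writes immediately instead of batching them.
    stream.set_nodelay(true)?;

    // Work on a duplicated fd; the options apply to the same underlying socket.
    let socket = Socket::from(stream.try_clone()?);
    socket.set_recv_buffer_size(256 * 1024)?; // bigger receive buffer
    socket.set_send_buffer_size(256 * 1024)?; // bigger send buffer
    socket.set_keepalive(true)?;              // notice dead peers sooner

    // The duplicate fd is dropped here; the original stream stays open.
    Ok(())
}
```

In an async setup, the same options can be applied to the std socket before handing it to tokio with TcpStream::from_std.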
Demo Video
In this quick demo video, I tested both tools with the same setup - port forwarding to a service while running curl in a loop - then deleted all pods with kubectl delete pods --all --force to see how each handles recovery.
- kubectl port forward: When pods get deleted, the port forward just dies even though it's forwarding to a service. All requests fail and you have to manually restart it.
- kftray: Loses maybe one request when pods get deleted. The watcher detects changes immediately and reconnects to new pods as they come up. The curl loop keeps going like nothing happened.
Blog Post: https://kftray.app/blog/posts/14-kftray-v0-21-updates