We are proud to present STUNner v1.1.0, the second GA release of the STUNner Kubernetes media gateway for WebRTC brought to you by l7mp.io. This release marks the culmination of a half-year development and stabilization cycle to bring your favorite WebRTC ingress gateway suite to a carrier-grade level.
Most of the work in this release went into implementing the new carrier-grade commercial features that will allow STUNner to scale to extremely large and performance-sensitive deployments. The open-source version was not left behind though: major additions are a new ICE tester tool to simplify the testing of fresh STUNner installations, new tutorials, lots of improvements to stunnerctl, plus countless fixes all around the place.
Deprecations
Originally, deploying STUNner required a separate installation of the stunner/stunner-gateway-operator Helm chart to deploy the control plane, plus the stunner/stunner chart to deploy the dataplane that actually implements the TURN gateways. With the introduction of the managed dataplane mode this legacy dataplane mode has become obsolete, and it has gone mostly unmaintained over the last couple of releases. With the v1.1 release, support for the legacy mode has been completely removed. In addition, the stunner/stunner Helm chart now installs the gateway operator instead of a standalone dataplane.
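Under the new scheme, a fresh managed-mode installation boils down to a single Helm chart. A sketch, assuming the standard l7mp Helm repository URL and the default stunner-system namespace (double-check both against the current installation guide):

```shell
# Add the STUNner Helm repository (URL per the upstream docs)
helm repo add stunner https://l7mp.io/stunner
helm repo update

# Install the gateway operator; it spins up the managed dataplane
# pods automatically, so no separate dataplane chart is needed
helm install stunner-gateway-operator stunner/stunner-gateway-operator \
    --create-namespace --namespace=stunner-system
```

Installing the legacy stunner/stunner chart now results in the same operator deployment, so existing install scripts keep working.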
Preparations for the commercial release
The open-source STUNner release has served our growing user base solidly over the last couple of years, making WebRTC available as a native Kubernetes service in a wide range of applications, from cloud gaming to live streaming and conferencing. That being said, a couple of issues still prevent large-scale enterprises from migrating from legacy TURN server frameworks to STUNner. Chief among these is performance (what else?): while Go, the programming language STUNner is written in, has provided solid performance for small to medium-scale deployments, the overhead of the memory-managed Go runtime has proved to be a bottleneck in some extremely large use cases. In addition, operating STUNner as a public STUN or TURN service still does not go as smoothly as it should.
The upcoming commercial release will allow STUNner to scale to a carrier-grade level. Most importantly, our industry-leading Linux/eBPF-based TURN acceleration framework reduces STUNner's CPU footprint more than a hundred-fold compared to the default version, making it possible to serve beyond 100 Gbps of TURN/UDP traffic with a single Linux server. User quota support helps mitigate DDoS attacks, and the STUN server mode, the DaemonSet dataplane option, and the relay address discovery feature simplify deploying STUNner as a public STUN and TURN service. The latter features will be available from the basic member tier, while the full enterprise tier will also include TURN offload on top of all member-tier features. All this comes with the unique scalability, security, and observability that STUNner users have come to expect over the years.
This release contains the results of the integration work that allows us to share the majority of the code between the open-source and commercial branches. Luckily, we could even release some features that did not fit into the commercial release for free in the open-source version!
Expect the announcement of the first commercial STUNner release in the coming weeks.
ICE tester
Cloud providers' Kubernetes offerings differ widely in terms of WebRTC readiness; in particular, UDP LoadBalancer support and the countless AWS/EKS subtleties have proved to be major pain points. Unsurprisingly, lots of users have asked for a reliable way to test and troubleshoot a fresh STUNner installation.
The current best practice is to deploy one of the simple STUNner tutorials and check whether everything works as expected; we recommend the UDP echo service or the "simple-tunnel" tutorial for initial testing. While this is fine for verifying that STUNner actually works, it does not provide useful clues for debugging broken installations.
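For instance, a minimal throwaway Gateway for such a smoke test can be sketched as follows (the resource name and the stunner namespace are illustrative; the TURN-UDP protocol and the GatewayClass name follow the upstream examples, so verify them against your own installation):

```shell
# Create a plain TURN-over-UDP gateway on the standard TURN port 3478
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: test-udp-gateway
  namespace: stunner
spec:
  gatewayClassName: stunner-gatewayclass
  listeners:
    - name: udp-listener
      port: 3478
      protocol: TURN-UDP
EOF
```

If the cloud provider's UDP LoadBalancer support is working, the Gateway should be assigned a public address; if not, that is usually the first place the installation breaks.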
This release contains the first experimental version of the stunnerctl icetest tool. Issuing stunnerctl icetest will deploy a full WHIP server into your Kubernetes cluster, configure STUNner to expose it on a temporary gateway, and fire up a WHIP client to connect to the server. If something breaks, the ICE tester prints a useful diagnostic message to help fix the problem. Otherwise, it performs a simple load test and reports the ICE candidates along with some performance indicators (packet rate, loss, RTT) for both symmetric and asymmetric ICE over UDP as well as TCP.
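Running the test needs no further setup; a sketch, assuming stunnerctl targets the cluster selected by your current kubeconfig context:

```shell
# Deploy the WHIP server and a temporary gateway, run the WHIP client,
# and report ICE candidates plus packet rate/loss/RTT statistics
stunnerctl icetest
```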
Note that the ICE test tool is currently experimental: it works most of the time but lacks extensive testing. Please report any issues and help us make this tool a rock-solid helper utility in the upcoming releases.
Miscellaneous improvements
As usual, this release comes with a number of fixes, additions and improvements all around the place (see the commit logs below). Chief among these are new STUNner tutorials (most importantly, a new Elixir example), metric export ported to OpenTelemetry, various stunnerctl extensions, and several improvements in the STUNner authentication service.
Enjoy STUNner, join us on Discord, and don't forget to support us!
Commit logs
STUNner
feature: Add allocation lifecycle event reporting
feature: Add license status info to the CDS API
feature: Allow stunnerctl to mix JSON queries and strings in the Output
feature: Allow custom user-ids in stunnerctl-auth
feature: Extend the CDS client for Node Address Discovery
feature: Implement ICE tester in stunnerctl
feature: Implement ICE tester server and client
feature: Implement license config manager stub
feature: Implement per-pod config patcher in the CDS server
fix: Allow running with empty auth when setting auth-type=none
fix: Fix race condition in logger
fix: Fix segfault when NewAuthHandler is called on an empty object
fix: Improve CDS server port randomization in tests
fix: Make sure listener URI address parses as a valid IP
fix: No longer delay config-delete messages in the CDS server
refactor: Reimplement ICE tester on top of standard WHIP
refactor: Rewrite metric exporting to Open Telemetry
chore: Add premium docs to ToC
chore: Build and publish icester images
chore: Bump Go version to 1.23
chore: Implement quota handler stub
chore: Improve error message consistency
chore: Make status printer smarter
chore: TURN offload integrations
chore: Update turncat minimum TLS to 1.2
doc: Add complete public TURN server config
doc: Add TURN offload to premium features
doc: Document premium features
doc: Remove the legacy dataplane mode
Gateway Operator
feature: Allow offload settings to be configured in the Dataplane
feature: Implement Node address patcher in the CDS server
feature: Pass the node name to stunnerd in env var
feature: Robustify the operator termination codepath
feature: Serve the current license status on the CDS server
fix: Align Dataplane spec field names with the K8s style guide
fix: Clean up lingering dataplane resources (Deployments/DaemonSets)
fix: Deprecate the rbac-proxy, fixes #185
fix: Generate NoMatchingParent status when route parent is missing
fix: Prevent random restarts by declaring the metrics port in the pod manifest
fix: External node IP is preferred over external DNS name
refactor: Abstract config renderers behind a generic interface
refactor: Add detailed node address discovery logs
refactor: Simplify the node controller
chore: Add a license manager stub
chore: Integrate stub license manager into the rendering pipeline
chore: Quota and STUN-mode placeholders
chore: Take relay-address placeholder from stunner
chore: Update license manager stub from stunner
doc: Document that modifying the resource-type in the DP is unsafe
test: Robustify tests with CDS server port randomization
Authentication service
feature: Support custom public address in generated configurations
fix: Remove the relay-address placeholder from TURN URI addr
chore: Add go-generate tags to recreate the OpenAPI bindings
chore: Port the auth-service handler to the latest CDS API