This release updates Kubernetes to v1.21.3.
For more details on what's new, see the Kubernetes release notes. If you are coming from v1.20 or earlier, it is recommended that you read the Urgent Upgrade Notes.
Important Note
If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This token is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.
You may retrieve the token value from any server already joined to the cluster:
cat /var/lib/rancher/rke2/server/token
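For example, a node joining the cluster can supply this token in its configuration file. The snippet below is a minimal sketch of /etc/rancher/rke2/config.yaml on the joining node, assuming the standard server and token configuration keys and the default supervisor port 9345; substitute your own server address and token value:
# /etc/rancher/rke2/config.yaml on the joining node (illustrative values)
server: https://<server-node-address>:9345
token: <contents of /var/lib/rancher/rke2/server/token on an existing server>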
Changes since v1.21.2+rke2r1
- Upgrade Kubernetes to v1.21.3 (#1421)
- Upgrade containerd to v1.4.8-k3s1 (#1398)
  - Addresses GHSA-c72p-9xmj-rx3w
- Upgrade runc to v1.0.0 (#1267)
- Bootstrap data is now reliably encrypted with the cluster token (#1428)
  - Addresses GHSA-hvj9-vfxp-c3cf
- Experimental support for Windows agents has been added (#1268)
- The nginx ingress controller now uses hardened images (#1271)
- The rke2-kube-proxy chart has been deprecated; kube-proxy now runs as a static pod (#1205)
  - With this change, the --kube-proxy-args flag is now supported in RKE2. RKE2 will automatically disable the kube-proxy static pod and retain the legacy rke2-kube-proxy chart if a rke2-kube-proxy HelmChartConfig resource is detected. See the configuration sketch after this list.
- The RKE2 cloud controller has been moved into a static pod to improve resilience under high datastore latency (#1216)
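As an illustration of the new kube-proxy-args support, the sketch below shows how additional kube-proxy flags might be passed via /etc/rancher/rke2/config.yaml. The exact key spelling and the flag values shown are assumptions based on RKE2's usual configuration conventions rather than text from this release note, so verify them against the documentation for your version:
# /etc/rancher/rke2/config.yaml (illustrative; key name and values are assumed)
kube-proxy-args:
  - proxy-mode=iptables
  - metrics-bind-address=0.0.0.0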
Packaged Component Versions
Component | Version |
---|---|
Kubernetes | v1.21.3 |
Etcd | v3.4.13-k3s1 |
Containerd | v1.4.8-k3s1 |
Runc | v1.0.0 |
CNI Plugins | v0.8.7 |
Metrics-server | v0.3.6 |
CoreDNS | v1.6.9 |
Ingress-Nginx | 3.34.001 |
Helm-controller | v0.10.1 |
Available CNIs
Component | Version | FIPS Compliant |
---|---|---|
Canal (Default) | Flannel v0.13.0-rancher1, Calico v3.13.3 | Yes |
Calico | v3.19.1 | No |
Cilium | v1.9.8 | No |
Multus | v3.7.1 | No |
Known Issues
- #1447 - When restoring RKE2 from backup to a new node, you should ensure that all pods are stopped following the initial restore:
# Install the target RKE2 version on the new node
curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_VERSION=v1.21.3+rke2r1 sh -
# Restore the snapshot, supplying the token from the original cluster
rke2 server \
  --cluster-reset \
  --cluster-reset-restore-path=<PATH-TO-SNAPSHOT> \
  --token <token used in the original cluster>
# Stop all pods and processes left running after the restore completes
rke2-killall.sh
# Enable and start the rke2-server service
systemctl enable rke2-server
systemctl start rke2-server
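Once the service is running again, one way to confirm the restore took effect is to check node and pod state with the bundled kubectl. This is purely an illustrative check, assuming the default RKE2 kubeconfig and binary paths:
# Paths below are the RKE2 defaults; adjust if you use non-default data or config directories
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
/var/lib/rancher/rke2/bin/kubectl get nodes
/var/lib/rancher/rke2/bin/kubectl get pods -A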
Helpful Links
As always, we welcome and appreciate feedback from our community of users. Please feel free to:
- Open issues here
- Join our Slack channel
- Check out our documentation for guidance on how to get started.