This release updates a number of RKE2 components to resolve minor issues identified in the v1.21.12+rke2r1 release.
## Important Note
If your server (control-plane) nodes were not started with the `--token` CLI flag or config file key, a randomized token was generated during initial cluster startup. This token is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.
You may retrieve the token value from any server already joined to the cluster:

```bash
cat /var/lib/rancher/rke2/server/token
```
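For example, when joining an additional server node, the retained token can be supplied through the config file rather than the CLI flag (a minimal sketch; the server address and token values shown are placeholders for your own cluster's details):

```yaml
# /etc/rancher/rke2/config.yaml on the joining node.
# 9345 is the RKE2 supervisor port; replace the placeholders
# with your existing server's address and the retained token.
server: https://<EXISTING-SERVER-ADDRESS>:9345
token: <TOKEN-FROM-/var/lib/rancher/rke2/server/token>
```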
## Changes since v1.21.12+rke2r1:
- Bump rke2-ingress-nginx to remove accidentally committed .orig files (#2843)
- Bump dynamiclistener to fix apiserver outage issue (#2847)
- Update Kubernetes to v1.21.12-rke2r2 (#2832)
## Packaged Component Versions

| Component | Version |
| --- | --- |
| Kubernetes | v1.21.12 |
| Etcd | v3.4.18-k3s1 |
| Containerd | v1.4.13-k3s1 |
| Runc | v1.0.3 |
| Metrics-server | v0.5.0 |
| CoreDNS | v1.9.1 |
| Ingress-Nginx | 4.1.0 |
| Helm-controller | v0.12.1 |
## Available CNIs

| Component | Version | FIPS Compliant |
| --- | --- | --- |
| Canal (Default) | Flannel v0.17.0<br/>Calico v3.19.2 | Yes |
| Calico | v3.19.2 | No |
| Cilium | v1.11.2 | No |
| Multus | v3.7.1 | No |
## Known Issues

- #1447 - When restoring RKE2 from backup to a new node, you should ensure that all pods are stopped following the initial restore:

```bash
curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_VERSION=v1.21.12+rke2r2 sh -
rke2 server \
  --cluster-reset \
  --cluster-reset-restore-path=<PATH-TO-SNAPSHOT> \
  --token <token used in the original cluster>
rke2-killall.sh
systemctl enable rke2-server
systemctl start rke2-server
```
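Once the service is running, you can confirm the restore completed by checking node status (a sketch against a live cluster; the kubeconfig and kubectl paths shown are RKE2 defaults on a standard install):

```bash
# Paths below are RKE2 defaults; adjust if you relocated the data dir.
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
/var/lib/rancher/rke2/bin/kubectl get nodes

# If the node does not become Ready, inspect the service log:
journalctl -u rke2-server --no-pager | tail -n 50
```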
## Helpful Links
As always, we welcome and appreciate feedback from our community of users. Please feel free to:
- Open issues here
- Join our Slack channel
- Check out our documentation for guidance on how to get started.