This release updates Kubernetes to v1.20.10, and fixes a number of minor issues.
For more details on what's new, see the Kubernetes release notes.
If your server (control-plane) nodes were not started with the
`--token` CLI flag or config file key, a randomized token was generated during initial cluster startup. This token is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.
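For reference, the token may also be set explicitly through the configuration file rather than the CLI flag. A minimal sketch, assuming the default config location `/etc/rancher/rke2/config.yaml` (the token value shown is a placeholder, not a real secret):

```yaml
# /etc/rancher/rke2/config.yaml
# Equivalent to passing --token on the rke2 server command line.
token: my-shared-secret   # placeholder; substitute your own value
```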
You may retrieve the token value from any server already joined to the cluster:
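A sketch of retrieving it, assuming the default RKE2 data directory (the path will differ if `--data-dir` was customized):

```shell
# Print the cluster join/bootstrap token from a server node.
# Assumes the default data directory /var/lib/rancher/rke2.
TOKEN_FILE=/var/lib/rancher/rke2/server/node-token
if [ -f "$TOKEN_FILE" ]; then
  sudo cat "$TOKEN_FILE"
else
  echo "token file not found at $TOKEN_FILE (is this a server node?)" >&2
fi
```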
Changes since v1.20.9+rke2r2
- Upgraded Kubernetes to v1.20.10 (#1547)
- Resolved an issue where restoration failed on etcd-only nodes due to an invalid load balancer state (#1658)
- Resolved an issue with URL pruning when an etcd member joined the cluster (#1668)
- Resolved an issue so that RKE2 uses the node's FQDN when the cloud provider is set to AWS (#1635)
- Resolved an issue where S3 snapshots were not shown in the Rancher UI (#1579)
Known Issues
- #1074 - Control-plane components may fail to start with "bind: address already in use" message. This will be resolved in a future release.
- #1447 - When restoring RKE2 from backup to a new node, you should ensure that all pods are stopped following the initial restore:
```shell
curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_VERSION=v1.20.10+rke2r1 sh -
rke2 server \
  --cluster-reset \
  --cluster-reset-restore-path=<PATH-TO-SNAPSHOT> \
  --token <token used in the original cluster>
rke2-killall.sh
systemctl enable rke2-server
systemctl start rke2-server
```
As always, we welcome and appreciate feedback from our community of users.