This release updates Kubernetes to v1.21.4, and fixes a number of issues.
For more details on what's new, see the Kubernetes release notes.
## Important Note
If your server (control-plane) nodes were not started with the `--token` CLI flag or config file key, a randomized token was generated during initial cluster startup. This token is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.
You may retrieve the token value from any server already joined to the cluster:
```
cat /var/lib/rancher/rke2/server/token
```
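For example, a copy of the token could be stored alongside your etcd snapshots and supplied through the RKE2 config file when standing up a replacement server node. The following is a minimal sketch, assuming the default config file location; the exact backup workflow is up to you:

```
# Minimal sketch: capture the bootstrap token from an existing server node,
# then record it in the RKE2 config file on the node being joined or restored.
TOKEN=$(cat /var/lib/rancher/rke2/server/token)

mkdir -p /etc/rancher/rke2
echo "token: ${TOKEN}" >> /etc/rancher/rke2/config.yaml
```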
## Changes since v1.21.3+rke2r1
- Upgraded Kubernetes to v1.21.4 (#1546)
- Updated Calico to v3.19.2 (#1693)
- Resolved an issue where pods would fail to deploy if nm-cloud-setup was enabled (#1670)
- Resolved an issue where restoration failed on etcd-only nodes due to an invalid load balancer state (#1657)
- Resolved an issue with URL pruning when an etcd member joined the cluster (#1667)
- Resolved an issue where Calico was not processing the ipamconfig correctly (#1624)
- Added support for customizing static pod manifests (#1525)
- Resolved an issue by having RKE2 use the FQDN when the cloud provider is set to AWS (#1618)
- Fixed an issue where the wrong HNS function was being called (#1604)
- Resolved an issue where S3 snapshots weren't showing in the Rancher UI (#1551)
- Resolved an issue where a node would become stuck at deletion due to a finalizer (#1561)
- Added additional annotations to PSPs for Rancher integration (#1518)
- Resolved an issue where systemd was not notified of the "READY" state on etcd nodes (#1485)
- Resolved an issue where workloads were unable to run with the default Microsoft Pause image (#1522)
- Added Windows uninstall script (#1497)
- Resolved an issue where the Windows service fails due to a timeout (#1499)
- Updated Kubernetes, containerd, hardened build base, and golangci-lint versions in the standard and Windows Dockerfiles (#1482) (#1481)
- Resolved an issue where the default deployment of rke2-ingress-nginx had the load balancer service enabled (#1466)
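Related to the last item above: if you need different defaults for the packaged ingress controller, RKE2 reads HelmChartConfig manifests from the server manifests directory. The following is an illustrative sketch rather than part of this release; the filename and the `controller.service.enabled` values key are assumptions based on the upstream ingress-nginx chart:

```
# Illustrative sketch (not from this release): override rke2-ingress-nginx
# chart values via a HelmChartConfig manifest. RKE2 applies manifests from
# this directory on startup; the values keys are assumed from the upstream
# ingress-nginx chart.
cat <<'EOF' > /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      service:
        enabled: false
EOF
```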
## Packaged Component Versions
| Component | Version |
| --- | --- |
| Kubernetes | v1.21.4 |
| Etcd | v3.4.13-k3s1 |
| Containerd | v1.4.9-k3s1 |
| Runc | v1.0.0 |
| CNI Plugins | v0.8.7 |
| Metrics-server | v0.3.6 |
| CoreDNS | v1.8.3 |
| Ingress-Nginx | 3.34.0 |
| Helm-controller | v0.10.6 |
## Available CNIs
| Component | Version | FIPS Compliant |
| --- | --- | --- |
| Canal (Default) | Flannel v0.13.0-rancher1<br/>Calico v3.13.3 | Yes |
| Calico | v3.19.2 | No |
| Cilium | v1.9.8 | No |
| Multus | v3.7.1 | No |
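A non-default CNI can be selected with the `cni` configuration key (the config-file equivalent of the `--cni` CLI flag) before the node first starts. A minimal sketch for a fresh server node; the choice of Cilium here is just an example:

```
# Minimal sketch: select Cilium as the CNI on a fresh server node,
# before rke2-server is started for the first time.
mkdir -p /etc/rancher/rke2
echo "cni: cilium" >> /etc/rancher/rke2/config.yaml
systemctl enable --now rke2-server
```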
## Known Issues
- #1447 - When restoring RKE2 from backup to a new node, you should ensure that all pods are stopped following the initial restore:
```
curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_VERSION=v1.21.4+rke2r1 sh -

rke2 server \
  --cluster-reset \
  --cluster-reset-restore-path=<PATH-TO-SNAPSHOT> \
  --token <token used in the original cluster>

rke2-killall.sh

systemctl enable rke2-server
systemctl start rke2-server
```
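One simple way to confirm that nothing is left running after `rke2-killall.sh` is to check for leftover RKE2-related processes before re-enabling the service. This check is a suggestion, not part of the documented procedure:

```
# Suggested sanity check (not part of the documented procedure): verify that
# no container shims or RKE2 processes survived rke2-killall.sh.
pgrep -a -f "containerd-shim|kube|rke2" || echo "no RKE2 processes remaining"
```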
## Helpful Links
As always, we welcome and appreciate feedback from our community of users. Please feel free to:
- Open issues here
- Join our Slack channel
- Check out our documentation for guidance on how to get started.