rancher/rke2 v1.22.9+rke2r1

This release updates Kubernetes to v1.22.9, fixes a number of minor issues, and includes security updates.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This token is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token
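
If you prefer to supply the token yourself rather than rely on the generated one, it can be set in the config file before the server is first started. Below is a minimal sketch, assuming the default config file path /etc/rancher/rke2/config.yaml and a placeholder token value:

# Minimal sketch: pin the cluster token in the RKE2 config file before first start.
# The path and the "token" key are the standard RKE2 defaults; the value is a placeholder.
sudo mkdir -p /etc/rancher/rke2
cat <<'EOF' | sudo tee /etc/rancher/rke2/config.yaml
token: my-shared-secret
EOF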

Changes since v1.22.8+rke2r1:

  • Fix windows dev images to include proper runtime tag (#2734)
  • Add message in the killall script (#2742)
  • Update cilium to support automatic dual-stack config (#2748)
  • vSphere Chart Updates (#2759)
  • Bump harvester cloud provider 0.1.11 (#2767)
  • Bump containerd to v1.5.11-k3s1 (#2769)
  • Bump Harvester csi to v0.1.11 (#2779)
  • Bump etcd to v3.5.3-k3s1 (#2764)
  • Remove kube-proxy static pod manifest when --disable-kube-proxy is set (#2786)
  • Update k8s to v1.22.9 (#2795)
  • Secrets-encrypt backports (#2798)
  • Remove failure:ignore tag for s390x pipeline (#2791)
  • Revert "[Release-1.22] Remove failure:ignore tag for s390x pipeline" (#2802)
  • Also inject system-default-registry as global.cattle.systemDefaultRegistry (#2806)
  • Bump k3s for April Fixes (#2813)
  • Bump containerd for selinux fix (#2821)
  • Upgrade ingress nginx chart and images (#2825)

Packaged Component Versions

Component         Version
Kubernetes        v1.22.9
Etcd              v3.5.3-k3s1
Containerd        v1.5.11-k3s2
Runc              v1.0.3
Metrics-server    v0.5.0
CoreDNS           v1.9.1
Ingress-Nginx     4.1.0
Helm-controller   v0.12.1

Available CNIs

Component         Version                           FIPS Compliant
Canal (Default)   Flannel v0.17.0, Calico v3.21.4   Yes
Calico            v3.21.4                           No
Cilium            v1.11.2                           No
Multus            v3.7.1                            No
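
The CNI is chosen with the cni config key (or the --cni flag) on server nodes; Canal is used when nothing is specified. Below is a minimal sketch, assuming the default config file path and that you want Cilium instead of the default Canal (set this before the first start of rke2-server):

# Minimal sketch: select Cilium as the CNI via the RKE2 config file.
# The "cni" key is the standard RKE2 option; Canal remains the default if unset.
cat <<'EOF' | sudo tee -a /etc/rancher/rke2/config.yaml
cni: cilium
EOF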

Known Issues

  • #1447 - When restoring RKE2 from backup to a new node, you should ensure that all pods are stopped following the initial restore:
curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_VERSION=v1.22.9+rke2r1 sh -
rke2 server \
  --cluster-reset \
  --cluster-reset-restore-path=<PATH-TO-SNAPSHOT> --token <token used in the original cluster>
rke2-killall.sh
systemctl enable rke2-server
systemctl start rke2-server
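
If you are unsure which path to pass to --cluster-reset-restore-path, local snapshots are kept under the server's data directory by default. A minimal sketch, assuming the default data directory and local (non-S3) snapshots:

# Minimal sketch: list local etcd snapshots in the default RKE2 data directory
# (adjust the path if you configured a custom data or snapshot directory).
ls -lh /var/lib/rancher/rke2/server/db/snapshots/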

Helpful Links

As always, we welcome and appreciate feedback from our community of users.
