This release updates Kubernetes to v1.20.8. For more details on what's new, see the Kubernetes release notes.
## Upgrade Notes
If you installed RKE2 from RPMs (the default on RHEL-based distributions), you will need to either re-run the installer or edit `/etc/yum.repos.d/rancher-rke2.repo` to point at the `latest/1.20` or `stable/1.20` channel (depending on how quickly you would like to receive new releases) in order to update RKE2 via yum.
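As a rough sketch of that repo edit, the commands below switch the channel path segment and then update the packages. The exact contents of `rancher-rke2.repo` on your system may differ, and the `latest/1.20` segment is assumed here purely for illustration; adjust it to whichever channel you actually use.

```bash
# Illustrative only -- the baseurl in your rancher-rke2.repo may differ.
# Switch the RPM repository from the latest/1.20 channel to stable/1.20:
sudo sed -i 's#/latest/1\.20/#/stable/1.20/#g' /etc/yum.repos.d/rancher-rke2.repo

# Then pull in the new release via yum (use rke2-agent on agent-only nodes):
sudo yum update -y rke2-server
```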
## Changes since v1.20.7+rke2r2
- Upgrade Kubernetes to v1.20.8 (#1135)
- The built-in vSphere helm chart is now enabled by passing `--cloud-provider-name=rancher-vsphere`, instead of simply `vsphere`. The implementation from the previous release made it impossible to enable the in-tree vSphere cloud provider; see the configuration sketch after this list. (#1115)
- CIS checks no longer require the `etcd` user to be present on agent nodes. (#1063)
- A message is now written to the logs when etcd snapshots are disabled. (#1123)
- Cluster bootstrap data (certs, etc.) are now more reliably written to the datastore. (#1117)
- The kube-proxy helm chart no longer quotes boolean values. (#1131)
- `kubernetes.default.svc` is now included in the default SANs on the Kubernetes API serving certificate. (#1113)
- RBAC resources for the RKE2 integrated cloud-controller-manager are now uniquely named. (#1118)
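As a hedged illustration of the vSphere change (#1115): RKE2 accepts its CLI flags as keys in `/etc/rancher/rke2/config.yaml`, so the new provider name can be set either there or on the command line. The snippet below is only a sketch of that configuration, not part of this release's changes.

```bash
# Sketch: enable the bundled vSphere charts by setting the new provider name.
# RKE2 reads flags from /etc/rancher/rke2/config.yaml (flag name without leading dashes).
cat <<'EOF' | sudo tee -a /etc/rancher/rke2/config.yaml
cloud-provider-name: rancher-vsphere
EOF

# Equivalent CLI form when starting the server directly:
# rke2 server --cloud-provider-name=rancher-vsphere
```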
## Packaged Component Versions
Component | Version |
---|---|
Kubernetes | v1.20.8 |
Etcd | v3.4.13-k3s1 |
Containerd | v1.4.4-k3s2 |
Runc | v1.0.0-rc95 |
CNI Plugins | v0.8.7 |
Flannel | v0.13.0-rancher1 |
Calico | v3.13.3 |
Metrics-server | v0.3.6 |
CoreDNS | v1.6.9 |
Ingress-Nginx | v1.36.3 |
Helm-controller | v0.9.2 |
## Known Issues
- #786 - NetworkManager interferes with network-related components. If your node has NetworkManager installed and enabled, please refer to the RKE2 Docs for a workaround; a sketch of that workaround appears after this list.
- #1118 - RKE2 integrated cloud-controller-manager RBAC conflicts with out-of-tree Helm charts. RBAC roles have been renamed so they no longer conflict, but if you are upgrading from an earlier release and plan on installing an out-of-tree cloud controller, you should run the following command to clean up the legacy roles:

```bash
kubectl delete clusterrole,clusterrolebinding cloud-controller-manager
```
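For #786, the workaround documented in the RKE2 Docs was, roughly, to tell NetworkManager to ignore the CNI-created interfaces and then reload it. The file name `rke2-canal.conf` and the interface globs below are taken from that guidance and should be treated as an assumption; check the docs for the current recommendation.

```bash
# Assumed workaround from the RKE2 docs: stop NetworkManager from managing
# the CNI-created interfaces (calico/flannel) on the node.
cat <<'EOF' | sudo tee /etc/NetworkManager/conf.d/rke2-canal.conf
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:flannel*
EOF

# Reload NetworkManager so the new configuration takes effect:
sudo systemctl reload NetworkManager
```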
## Helpful Links
As always, we welcome and appreciate feedback from our community of users. Please feel free to:
- Open issues here
- Join our Slack channel
- Check out our documentation for guidance on how to get started.