This release updates Kubernetes to v1.21.3.
Before upgrading from earlier releases, be sure to read the Kubernetes Urgent Upgrade Notes.
## ⚠️ Important Upgrade Note ⚠️
If you are using K3s in an HA configuration with an external SQL datastore, and your server (control-plane) nodes were not started with the `--token` CLI flag, you will no longer be able to add additional K3s servers to the cluster without specifying the token. Ensure that you retain a copy of this token, as it is also required when restoring from backup. Previously, K3s did not enforce the use of a token when using external SQL datastores.
You may retrieve the token value from any server already joined to the cluster:
```sh
cat /var/lib/rancher/k3s/server/token
```
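As a minimal sketch of joining an additional server with the token, assuming a hypothetical MySQL datastore endpoint; substitute your own `--datastore-endpoint` connection string and the token value retrieved above:
```sh
# Hypothetical datastore endpoint and credentials; replace with your own values.
k3s server \
  --datastore-endpoint="mysql://user:pass@tcp(db.example.com:3306)/k3s" \
  --token=<value of /var/lib/rancher/k3s/server/token>
```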
## Changes since K3s v1.21.2+k3s1:
- Upgrade Kubernetes to v1.21.3 (#3652)
- Upgrade runc to v1.0.0 (#3603)
- Upgrade k3s-root to v0.9.1 (#3665)
- Upgrade containerd to v1.4.8-k3s1 (#3683)
- Addresses GHSA-c72p-9xmj-rx3w - Bootstrap data is now reliably encrypted with the cluster token (#3514/#3690)
- Addresses GHSA-cxm9-4m6p-24mc - The K3s cloud controller has been moved into a dedicated executor to improve resilience under high datastore latency (#3530)
- Directories created for local-path-provisioner now have more restrictive permissions (#3548)
- The embedded Helm controller more reliably handles interrupted Helm operations (#3550)
- The embedded kube-router network policy controller has been updated, offering enhanced performance and security (#3595)
- `k3s etcd-snapshot` error messages are now more helpful (#3592)
- The in-cluster list of available etcd snapshots is no longer updated when snapshots are disabled (#3610)
- The deploy controller now includes the node id in the field manager name (#3614)
- The deploy controller now emits events when managing AddOn resources (#3615)
- `k3s etcd-snapshot prune` now correctly prunes snapshots with custom names (#3672)
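As an illustration only, pruning snapshots saved under a custom name might look like the sketch below. The `mysnap` prefix is a hypothetical placeholder, and the `--name` flag is an assumption carried over from `k3s etcd-snapshot save`; confirm the exact flags against `k3s etcd-snapshot prune --help` on your version:
```sh
# Assumed usage: prune snapshots whose names begin with the hypothetical
# prefix "mysnap", down to the configured retention count.
k3s etcd-snapshot prune --name mysnap
```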
## Known Issues
- There is a regression that may cause issues with deleting nodes, due to finalizers not being removed. If you observe that a node is stuck and is not being deleted, you can describe the node to see whether any finalizers remain. If there are, you can work around this issue by running the following command to remove them:
```sh
# replace $NODENAME with the name of the node
kubectl patch node $NODENAME -p '{"metadata":{"finalizers":[]}}' --type=merge
```
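To check for remaining finalizers directly (rather than scanning `kubectl describe` output), a standard JSONPath query works; empty output means no finalizers remain:
```sh
# Print any finalizers still set on the node's metadata.
kubectl get node $NODENAME -o jsonpath='{.metadata.finalizers}'
```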
## Embedded Component Versions
| Component | Version |
|---|---|
| Kubernetes | v1.21.3 |
| Kine | v0.6.2 |
| SQLite | 3.33.0 |
| Etcd | v3.4.13-k3s1 |
| Containerd | v1.4.8-k3s1 |
| Runc | v1.0.0 |
| Flannel | v0.14.0 |
| Metrics-server | v0.3.6 |
| Traefik | v2.4.8 |
| CoreDNS | v1.8.3 |
| Helm-controller | v0.10.1 |
| Local-path-provisioner | v0.0.19 |
## Helpful Links
As always, we welcome and appreciate feedback from our community of users. Please feel free to:
- Open issues here
- Join our Slack channel
- Check out our documentation for guidance on how to get started or to dive deep into K3s.
- Read how you can contribute here