This release updates Kubernetes to v1.18.10.
For more details on what's new, see the Kubernetes release notes.
Changes since v1.18.9+rke2r1
RKE2, also known as RKE Government, is Rancher's next-generation Kubernetes distribution.
It is a fully conformant (currently under review) Kubernetes distribution that focuses on security and compliance within the U.S. Federal Government sector.
To meet these goals, RKE2 does the following:
- Provides defaults and configuration options that allow clusters to pass the CIS Kubernetes Benchmark with minimal operator intervention
- Enables FIPS 140-2 compliance
- Supports SELinux policy and Multi-Category Security (MCS) label enforcement
- Recompiles, repackages, and scans components for CVEs using Trivy in our build pipeline
RKE2 also brings together the best aspects of Rancher's RKE1 and K3s distributions. For more on this, check out our docs.
This release also addresses the following upstream CVEs:
- CVE-2020-8564 - Docker config secrets leaked when file is malformed and loglevel >= 4
- CVE-2020-8566 - Ceph RBD admin secrets can be written to logs when kube-controller-manager runs at logLevel >= 4
You can read more about the CVEs here.
Try it Now
On a modern AMD64 Linux host, run:
```bash
curl -sfL https://get.rke2.io | sh -
```
See our Quick Start Guide for more.
Features
RKE2 introduces the following features and functionality to make operating a secure Kubernetes cluster easier than ever.
Ease of Deployment
RKE2 is deployed as a single binary that then launches the entire Kubernetes stack on your host. Containerd and the kubelet are launched as host-level processes managed by RKE2. Etcd and Kubernetes control plane components are then launched as static pods by the kubelet. Finally, additional components such as Canal CNI, CoreDNS, NGINX ingress, kube-proxy, and metrics-server are launched as customizable Helm charts.
For more details on the design of RKE2, see our architecture documentation.
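As a rough illustration of this layering (the paths below are assumptions based on our documented defaults and may differ on your system), you can inspect the host-level processes, the static pod manifests picked up by the kubelet, and the chart-deployed components:

```bash
# Host-level processes managed by RKE2 (containerd and the kubelet)
ps -ef | grep -E 'containerd|kubelet'

# Static pod manifests for etcd and the control plane components,
# written by RKE2 and launched by the kubelet
# (default directory assumed from the RKE2 docs)
ls /var/lib/rancher/rke2/agent/pod-manifests/

# Packaged components (Canal, CoreDNS, NGINX ingress, metrics-server)
# are deployed as Helm charts; use the bundled kubeconfig to list them
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
/var/lib/rancher/rke2/bin/kubectl get pods -n kube-system
```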
Security and Compliance Focused
RKE2 has been built from the ground up with security and compliance in mind. Here are the key features and functionality in this area:
SELinux Support - Working with upstream, we've brought SELinux Multi-Category Security (MCS) Labels enforcement to containerd and incorporated that into RKE2. When installing RKE2 using our RPMs, required SELinux policies will automatically be installed and MCS Label enforcement enabled in containerd. See upstream PRs: containerd/cri#1487 and containers/container-selinux#98.
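On an SELinux-enforcing host installed from our RPMs, a quick sanity check (standard SELinux tooling, not RKE2-specific) is to confirm the mode and look at the MCS labels carried by container processes:

```bash
# Confirm the host is enforcing SELinux
getenforce

# Container processes should run in container domains with distinct
# MCS labels (e.g. c123,c456) when enforcement is active
ps -eZ | grep container_t
```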
FIPS 140-2 Enablement - RKE2 and the components it deploys (with the exception of NGINX ingress) have been compiled to leverage the FIPS 140-2 validated BoringCrypto module. Moreover, this module is currently being revalidated as the Rancher Kubernetes Cryptographic Library for the additional platforms supported by RKE2.
Simple CIS Kubernetes Benchmark v1.5 Compliance - RKE2 was designed to pass the majority of this benchmark without any additional configuration. Because some controls can impact cluster functionality or require host-level modifications, passing them is opt-in: operators can follow the simple hardening guide and launch the cluster with our cis-1.5 profile. Running in the CIS profile ensures that host-level requirements have been met and enforces cluster-level requirements, such as turning on restrictive PodSecurityPolicies.
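A minimal sketch of opting in to the CIS profile, assuming the profile option and the default /etc/rancher/rke2/config.yaml location described in the hardening guide (host-level prerequisites from the guide, such as the etcd user and kernel parameters, still apply):

```bash
# Enable the CIS 1.5 profile before (re)starting the server
mkdir -p /etc/rancher/rke2
cat <<EOF > /etc/rancher/rke2/config.yaml
profile: cis-1.5
EOF

systemctl restart rke2-server.service   # or start it, if not yet running
```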
Secure and Hardened Images and Binaries - Upstream components are recompiled, repackaged, and scanned (excluding NGINX ingress). This reduces operators' exposure to critical CVEs.
Etcd Cluster Management
Etcd management can be one of the most challenging aspects of running a Kubernetes cluster, and RKE2 makes it simple.
As server (control plane) nodes are added and removed from your cluster, RKE2 will automatically add and remove them from the underlying etcd cluster, simplifying complex quorum management issues.
RKE2 also automatically takes snapshots of the etcd database, making rollback and recovery of your cluster straightforward. More details here.
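Snapshots are taken automatically, and the schedule and retention can also be tuned. A hedged sketch, assuming the etcd-snapshot options and default snapshot directory described in our docs (verify the option names against the docs for your release):

```bash
# Example snapshot tuning appended to /etc/rancher/rke2/config.yaml
# (option names assumed from the RKE2/K3s docs)
cat <<EOF >> /etc/rancher/rke2/config.yaml
etcd-snapshot-schedule-cron: "0 */6 * * *"   # take a snapshot every six hours
etcd-snapshot-retention: 10                  # keep the last ten snapshots
EOF

# Snapshots are written under the server's data directory by default
ls /var/lib/rancher/rke2/server/db/snapshots/
```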
Bootstrapping Custom Workloads
Like K3s, RKE2 makes bringing up a cluster with custom workloads incredibly simple.
Operators just need to drop Kubernetes manifests into RKE2's manifest directory. Learn more.
For an additional level of customization and control, operators can pair this with our built-in Helm controller and CRDs. Learn more.
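For example, a HelmChart resource dropped into the manifests directory is picked up by the built-in Helm controller and installed into the cluster. A sketch, assuming the default manifests path from our docs; the chart name and repository here are purely illustrative:

```bash
# Any manifest placed here is applied to the cluster automatically by RKE2
cat <<EOF > /var/lib/rancher/rke2/server/manifests/example-chart.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: example-app              # illustrative name
  namespace: kube-system
spec:
  chart: example-app             # illustrative chart
  repo: https://charts.example.com
  targetNamespace: default
  valuesContent: |-
    replicaCount: 2
EOF
```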
Embedded Component Versions
Component | Version |
---|---|
Kubernetes | v1.18.10 |
Etcd | v3.4.13-k3s1 |
Containerd | v1.3.6-k3s2 |
Runc | v1.0.0-rc92 |
CNI Plugins | v0.8.7 |
Flannel | v0.13.0-rancher1 |
Calico | v3.13.3 |
Metrics-server | v0.3.6 |
CoreDNS | v1.6.9 |
Ingress-Nginx | v1.36.3 |
Helm-controller | v0.7.3 |
Known Issues
- Audit logging cannot be configured for the Kubernetes apiserver. The apiserver is launched as a static pod and does not have the necessary host bind mounts. #410
- Nodes may occasionally get stuck upon deletion. Please see the issue for a workaround. #401
- Helm-controller sometimes leaves behind failed install job pods with reason "NodeAffinity" due to an upstream issue. This is harmless: the pods are recreated and eventually succeed, but the failed pods remain until manually deleted. #127
- The cluster-init flag is vestigial and is not needed to set up embedded etcd; see the docs for details on how to set up a cluster. #426
These items will be addressed in an upcoming release.
Helpful Links
As always, we welcome and appreciate feedback from our community of users. Please feel free to:
- Open issues here
- Join our Slack channel
- Check out our documentation for guidance on how to get started.