Release v2.3.7

Important

  • Please review the v2.3.0 release notes for important updates and breaking changes if you are upgrading from a v2.2 release.

The following versions are now latest and stable:

Type    Rancher Version  Docker Tag              Helm Repo             Helm Chart Version
Latest  v2.4.2           rancher/rancher:latest  server-charts/latest  v2.4.2
Stable  v2.3.7           rancher/rancher:stable  server-charts/stable  v2.3.7

Please review our version documentation for more details on versioning and tagging conventions.

Features and Enhancements

  • Monitoring Enhancements:
    • Remote read/write configs [#20624]: Ability to configure remote read and write, which allows additional integrations with remote storage solutions.
    • Configuration of livenessProbe settings [#23983]: Ability to configure the livenessProbe settings.
  • Ability to use nodelocal DNS [#25811] (see the sketch below)
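
NodeLocal DNS is enabled through the cluster configuration. Below is a minimal cluster.yml sketch; the dns.nodelocal block and the 169.254.20.10 link-local address are assumptions based on the RKE cluster options, so verify the exact keys against the docs for your Rancher/RKE version.

    # Sketch of an RKE/Rancher cluster.yml fragment enabling NodeLocal DNS.
    # The nodelocal block and the link-local IP are illustrative assumptions;
    # check the cluster options documented for your version.
    dns:
      provider: coredns
      nodelocal:
        ip_address: "169.254.20.10"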

Experimental Features

Experimental components inside Rancher can be turned on and off using feature flags. You can manage feature flags through the UI. Alternatively, you can refer to our docs on how to turn features on when starting Rancher.

Major Bugs Fixed Since v2.3.6

  • Fixed a bug where API fields for EKS clusters appeared editable but were not [#24652]
  • Fixed a bug causing 400 errors while allocating an EIP in Aliyun [#15847]
  • Fixed a bug preventing imported clusters from accepting labels [#26496]
  • Fixed a bug causing the monitoring pod's memory and CPU to be doubled [#26594]
  • Fixed a bug causing EKS clusters to not fully rotate their AWS credentials [#25835]
  • Fixed a bug causing EKS cluster upgrades to get stuck if run from the AWS console [#24171]
  • Fixed a bug in the API where projectId was required for apps in Project level Catalog [#24371]

Other notes

Air Gap Installations and Upgrades

In v2.3.0, an air gap install no longer requires mirroring the systems chart git repo. Please follow the directions on how to install Rancher to use the packaged systems chart.
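
For reference, below is a minimal sketch of Helm chart values for an air gap install that uses the packaged system charts; useBundledSystemChart comes from the air gap install documentation, while the hostname and registry values are illustrative placeholders.

    # Illustrative Helm values for an air gap Rancher install (sketch only).
    # useBundledSystemChart makes Rancher use the packaged system charts
    # instead of a mirrored system charts git repo.
    hostname: rancher.example.com                         # placeholder hostname
    rancherImage: registry.example.com/rancher/rancher    # placeholder private registry
    useBundledSystemChart: true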

Known Major Issues

  • NGINX ingress controller 0.25.0 doesn't work on CPUs without SSE4.2 instruction set support [#23307]
  • Windows Limitations - There are a couple of known limitations with Windows due to upstream issues:
    • Windows pods cannot access the Kubernetes API when using the VXLAN (Overlay) backend for the flannel network provider. The workaround is to use the Host Gateway (L2bridge) backend for the flannel network provider (see the sketch after this list). [#20968]
    • Logging only works on Host Gateway (L2bridge) backend for the flannel network provider [#20510]
  • HPA Limitation - HPA UI doesn't work on GKE clusters as GKE doesn't support the v2beta2.autoscaling API [#22292]
  • Hardening Guide Limitations - If you have used Rancher's hardening guide, there are some known issues:
    • kubectl in UI doesn't work [#19439]
    • Pipelines don't work [#22844]
  • Taints added to existing node templates in an upgraded setup will not be applied unless a reconcile is triggered on the cluster. Scaling worker nodes up or down does not trigger a reconcile, but scaling control plane/etcd nodes up or down, or editing the cluster (for example, upgrading to the latest Kubernetes version), does and will apply the taints to the nodes. [#22672]
  • Cluster alerting and logging can get stuck in an Updating state after upgrading Rancher. Workaround steps are provided in the issue [#21480]
  • If you have a Rancher cluster with the OpenStack cloud provider and a LoadBalancer configured, and the cluster was provisioned on version v2.2.3 or earlier, upgrading to Rancher v2.2.4 or later will fail. Steps to mitigate can be found in the comment on [#20699]
  • In clusters that have a Kubernetes cloud provider configured and have agents registered with a hostname or FQDN (i.e. not valid IP addresses), kube-proxy will fail to start. This can be checked in the API output for the node (customConfig -> address or internalAddress) [RKE#1725]
  • The Rancher log collection format changed when the Fluentd Kubernetes metadata plugin was upgraded. A JSON log is no longer parsed and put into the log as top-level keys. See [#23646] for the issue tracking optionally bringing back this behavior
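
For the Windows VXLAN limitation above, below is a minimal sketch of the flannel Host Gateway (L2bridge) workaround in the cluster configuration; the option names are assumptions based on the RKE network options, so verify them against the docs for your version.

    # Sketch of an RKE/Rancher cluster.yml network fragment selecting the
    # Host Gateway (L2bridge) flannel backend instead of VXLAN (assumed keys).
    network:
      plugin: flannel
      options:
        flannel_backend_type: host-gw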

Versions

Images

  • rancher/rancher:v2.3.7
  • rancher/rancher-agent:v2.3.7

Tools

Kubernetes

Upgrades and Rollbacks

Rancher supports both upgrades and rollbacks. Please note the version you would like to upgrade or roll back to when changing the Rancher version.

Please be aware that upon upgrading to v2.3.0+, any edits to a Rancher-launched Kubernetes cluster will cause all system components to restart due to added tolerations to Kubernetes system components. Plan accordingly.

Recent changes to cert-manager require an upgrade if you have an HA install of Rancher using self-signed certificates. If you are using cert-manager older than v0.9.1, please see the documentation on how to upgrade cert-manager.

Important: When rolling back, we expect you to roll back to the state at the time of your upgrade. Any changes made after the upgrade will not be reflected.
