# Release v2.12.1
Important: If you are using Active Directory Federation Services (AD FS), upgrading to Rancher v2.10.1 or later may cause authentication issues that require manual intervention. These issues occur because the AD FS Relying Party Trust is not able to pick up a signature verification certificate from the metadata. They can be corrected by either of two methods:
- Updating the Relying Party Trust information from federation metadata (Relying Party Trust -> Update from Federation Metadata...)
- Directly adding the certificate (Relying Party Trust -> Properties -> Signature tab -> Add -> Select the certificate).
For more information, see #48655.
Important: Rancher-Istio is deprecated as of Rancher v2.12.0; users should instead use the SUSE Application Collection build of Istio for enhanced security (included in SUSE Rancher Prime subscriptions). Detailed information can be found in this announcement.
Rancher v2.12.1 is the latest patch release of Rancher. This is a Community and Prime version release that introduces maintenance updates and bug fixes. To learn more about Rancher Prime, see our page on the Rancher Prime Platform.
## Changes Since v2.12.0
See the full list of changes.
## Security Fixes for Rancher Vulnerabilities
This release addresses the following Rancher security issues:
- POST requests to the Rancher API endpoints are now limited to 1 Mi; this limit is configurable through the settings if you need a larger one. The Rancher authentication endpoints are configured independently of the main public API, as you might need bigger payloads in the other API endpoints. If you need to increase the maximum allowed payload for authentication, you can set the environment variable `CATTLE_AUTH_API_BODY_LIMIT` to a quantity, e.g., `2Mi`, to allow larger payloads for the authentication endpoints. For more information, see CVE-2024-58259.
- Following a recent change excluding Helm values files from bundles, an edge case remained where values files referenced in `fleet.yaml` with a directory prefix (e.g., `my-dir/values.yaml` instead of `values.yaml`) were not excluded, which could potentially expose confidential data in bundle resources. Helm values files are now excluded from bundle resources regardless of how they are referenced. For more information, see CVE-2023-32198.
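For example, a minimal sketch of setting this variable via Helm values, assuming the Rancher chart's standard `extraEnv` list for injecting environment variables into the server pods:

```yaml
# Excerpt of Helm values for the Rancher chart (extraEnv mechanism assumed).
extraEnv:
  - name: CATTLE_AUTH_API_BODY_LIMIT
    value: "2Mi"  # allow larger payloads on the authentication endpoints only
```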
For more details, see the Security Advisories and CVEs page in Rancher's documentation or in Rancher's GitHub repo.
## Rancher General
### Features and Enhancements
- (Prime Only) In Rancher v2.12.1, SUSE Rancher Prime customers can now register Rancher Manager with SUSE Customer Center (SCC). This registration helps you track the use of your organization's subscriptions and deployed Rancher Manager instances.
  - If you are registering Rancher Manager offline, an SCC Organization Admin must get your registration certificate, as regular users do not have access.
- (Prime Only) Support for the RKE2 STIG profile has been added to the Rancher Compliance App. A script and the necessary manifest are provided in SCC to help users meet STIG compliance requirements.
## Rancher App (Global UI)
### Known Issues
- When a standard user with the Cluster Owner role attempts to edit an Azure or AKS cluster, the Machine Pools section shows the error `Cannot read properties of undefined...`. As a workaround, standard users must manually add their cloud credentials to create, edit, and manage Azure or AKS clusters. See #15241.
- When registering a Rancher Manager instance with SCC, the Status field in the Rancher UI can fail to update. Refresh the page to restore functionality. See #14985.
## Cluster Provisioning
### Known Issues
- When a change is made to a cluster configuration using a Nutanix Node Driver, Rancher incorrectly reprovisions Nutanix nodes, even though the `NutanixConfig` configuration itself has not been changed. See #51685.
## Major Bug Fixes
- Fixed an issue affecting v2.11.4 and v2.12.0 that could cause a memory leak, high CPU usage, and excessive logging, leading to increased costs for users of cloud services like AWS CloudWatch. See #51394, #51395, and #51377.
- Fixed an issue where a standard user assigned as a cluster owner was unable to view, manage, and delete their downstream clusters. See #51589.
- Fixed an issue where, when provisioning clusters with KEv1 cluster drivers, such as LKE (Linode), OKE (Oracle), or other third-party Kontainer drivers, the downstream cluster initially came up `active` but then continuously toggled between `active` and `updating`. See #51487.
## Install/Upgrade Notes
- If you're installing Rancher for the first time, your environment must fulfill the installation requirements.
Important: Rancher now requires the cluster it runs on to have the Kubernetes API Aggregation Layer enabled, because Rancher extends Kubernetes with additional APIs by registering its own extension API server. Note that all Kubernetes versions supported by this Rancher version's K8s distributions (RKE2/K3s) have the aggregation layer configured and enabled by default. Refer to the Extension API Server documentation and #50400 for more information.
Important: Rancher Kubernetes Engine (RKE/RKE1) has reached end of life as of July 31, 2025. Rancher versions 2.12.0 and later no longer support provisioning or managing downstream RKE1 clusters. We recommend replatforming RKE1 clusters to RKE2 to ensure continued support and security updates. Learn more about the transition here.
Rancher now has a pre-upgrade validation check for RKE1 resources, which fails and lists any RKE1 resources still present. Refer to the RKE1 Resource Validation and Upgrade Requirements documentation and #50286 for more information.
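As a pre-upgrade spot check, you can list management clusters and their provisioning drivers; RKE1 clusters report the `rancherKubernetesEngine` driver. The field path below is an assumption based on the `management.cattle.io/v3` cluster schema:

```sh
# Run against the Rancher local cluster; any "rancherKubernetesEngine" entries
# are RKE1 clusters that will block the upgrade.
kubectl get clusters.management.cattle.io \
  -o custom-columns='NAME:.metadata.name,DRIVER:.status.driver'
```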
Important: It is crucial that you review the available disk space on your nodes and plan accordingly before upgrading to Rancher v2.12.0 to avoid potential disk pressure and pod eviction issues. For additional information refer to the UI Server Side Pagination - Disk Space documentation.
Important: Rancher now has an enablement option, `AUDIT_LOG_ENABLED`, for the API Audit Log of a Rancher installation. In Rancher versions 2.11.x and earlier, only `AUDIT_LEVEL` could be set, and the default log level (`0`) disabled the audit log. In Rancher versions 2.12.x and later, the default log level (`0`) records only request and response metadata, and it can be set when configuring `AUDIT_LOG_ENABLED`. If installing or upgrading via Helm, you can enable the API Audit Log and specify the log level by applying the following settings to your Helm command: `--set auditLog.enabled=true --set auditLog.level=0`. See the Enabling the API Audit Log to Record System Events documentation and #48941.
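For example, a minimal upgrade sketch with the audit log enabled; the release name, chart repository, namespace, and hostname are placeholders for your own values:

```sh
helm upgrade rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set auditLog.enabled=true \
  --set auditLog.level=0   # 0 = request/response metadata only in v2.12.x
```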
### Changes in Image Artifacts
Image artifact digest files are renamed in Rancher v2.12.0, v2.11.4, and v2.10.8. Up until this change, separate image digest files for each operating system and architecture were maintained for compatibility reasons. With this change, only one file is provided for each operating system:
- The `rancher-images-digests-linux-amd64.txt` and `rancher-images-digests-linux-arm64.txt` files are renamed to `rancher-images-digests-linux.txt`.
- The `rancher-images-digests-windows-ltsc2019.txt` and `rancher-images-digests-windows-ltsc2022.txt` files are renamed to `rancher-images-digests-windows.txt`.
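If your air-gap mirroring scripts still reference the old per-architecture file names, point them at the combined files instead. A hypothetical fetch, assuming the usual release-asset URL layout:

```sh
# Download the combined Linux digests file for this release (URL pattern assumed).
curl -fsSLO https://github.com/rancher/rancher/releases/download/v2.12.1/rancher-images-digests-linux.txt
```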
### Upgrade Requirements
- Creating backups: Create a backup before you upgrade Rancher. To roll back Rancher after an upgrade, you must first back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to the same state as when the backup was created, any changes post-upgrade will not be included after the restore.
- Helm version requirements:
  - To manage Rancher 2.12.x and later, you must upgrade your Helm client to version 3.18 or newer.
  - This change is required to reflect the addition of Kubernetes 1.33 support with this release.
  - Currently, the official Helm Version Support Policy dictates that only Helm 3.18 supports the proper Kubernetes version range for Rancher 2.12.
- CNI requirements:
  - For Kubernetes v1.19 and later, disable firewalld, as it's incompatible with various CNI plugins (a command sketch follows below). See #28840.
  - When upgrading or installing a Linux distribution that uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or later, upgrade to RKE v1.19.2 or later to get Flannel v0.13.0, which supports nf_tables. See Flannel #1317.
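  A minimal sketch for disabling firewalld on a systemd-based node:

  ```sh
  # Stop firewalld now and keep it disabled across reboots.
  systemctl disable --now firewalld
  ```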
- Requirements for air-gapped environments:
  - When using a proxy in front of an air-gapped Rancher instance, you must pass additional parameters to `NO_PROXY`. See the documentation and issue #2725.
  - When installing Rancher with Docker in an air-gapped environment, you must supply a custom `registries.yaml` file to the `docker run` command, as shown in the K3s documentation. If the registry has certificates, then you'll also need to supply those. See #28969.
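  For reference, a hypothetical `registries.yaml` following the K3s private-registry schema; the registry hostname and certificate path are placeholders:

  ```yaml
  mirrors:
    docker.io:
      endpoint:
        - "https://registry.example.com:5000"
  configs:
    "registry.example.com:5000":
      tls:
        ca_file: /etc/ssl/certs/registry-ca.pem  # needed when the registry uses a private CA
  ```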
- Requirements for general Docker installs:
  - When starting the Rancher Docker container, you must use the `privileged` flag (see the example after this list). See documentation.
  - When upgrading a Docker installation, a panic may occur in the container, which causes it to restart. After restarting, the container will come up and work as expected. See #33685.
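As a reference for the two Docker notes above, the documented single-node install pattern looks like the following; the ports and image tag are illustrative:

```sh
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:v2.12.1
```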
## Versions
Please refer to the README for the latest and stable Rancher versions.
Please review our version documentation for more details on versioning and tagging conventions.
### Images
- rancher/rancher:v2.12.1
### Tools
- CLI - v2.12.1
### Kubernetes Versions for RKE2/K3s
- v1.33.3 (Default)
- v1.32.7
- v1.31.11
### Rancher Helm Chart Versions
In Rancher v2.6.0 and later, in the Apps & Marketplace UI, many Rancher Helm charts are named with a major version that starts with 100. This prevents simultaneous upstream and Rancher changes from causing conflicting version increments. It also complies with semantic versioning (SemVer), which is a requirement for Helm. You can see the upstream version number of a chart in the build metadata, for example: `100.0.0+up2.1.0`. See #32294.
## Previous Rancher Behavior Changes
### Previous Rancher Behavior Changes - Rancher General
- Rancher v2.12.0:
  - Kubernetes v1.30 is no longer supported. Before upgrading to Rancher v2.12.0, ensure all clusters are running Kubernetes v1.31 or later. See #49679.
  - The feature flag `ui-sql-cache` (Server-Side Pagination) is now enabled by default in Rancher. Please refer to the UI Server-Side Pagination documentation for more information.

    Beginning with Rancher v2.12.0, UI Server-Side Pagination is enabled by default to provide significant performance improvements across the Rancher UI. This feature relies on a caching mechanism that introduces a new requirement for ephemeral disk space on your cluster nodes.

    This cache, an internal SQLite database, is stored within the container's filesystem. This affects the nodes running the Rancher server pods (`rancher` in the `cattle-system` namespace on the local cluster) and the nodes running the Rancher agent pods (`cattle-cluster-agent` in the `cattle-system` namespace on all downstream clusters).

    The amount of disk space required is dynamic and depends on the quantity and size of Kubernetes resources visualized in the UI. As a guideline, the cache may consume approximately twice the size of the raw Kubernetes objects it stores. For instance, internal tests showed that caching 5000 ConfigMaps, totaling 50 MB, consumed 81 MB of disk space. For a conservative, high-level estimate, you can plan for the available disk space on each relevant node to be at least twice the size of your etcd snapshot. For most production environments, ensuring a few extra gigabytes of storage are available on the relevant nodes is a safe starting point.

    It is crucial that you review the available disk space on your nodes and plan accordingly before upgrading to this version, to avoid potential disk pressure and pod eviction issues.

    This update introduced limitations, which are outlined in the UI Server-Side Pagination documentation. See #48691 and #12975 for more information.
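    Since the cache lives on node-local disk, it helps to review allocatable ephemeral storage before upgrading. A quick spot check, using the standard Kubernetes node status fields; run it against the local cluster and each downstream cluster:

    ```sh
    # List each node's allocatable ephemeral storage, which backs the UI cache.
    kubectl get nodes \
      -o custom-columns='NODE:.metadata.name,EPHEMERAL:.status.allocatable.ephemeral-storage'
    ```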
### Previous Rancher Behavior Changes - Logging
- Rancher v2.12.0:
  - Rancher now has an enablement option, `AUDIT_LOG_ENABLED`, for the API Audit Log of a Rancher installation. In Rancher versions 2.11.x and earlier, only `AUDIT_LEVEL` could be set, and the default log level (`0`) disabled the audit log. In Rancher versions 2.12.x and later, the default log level (`0`) records only request and response metadata, and it can be set when configuring `AUDIT_LOG_ENABLED`. If installing or upgrading via Helm, you can enable the API Audit Log and specify the log level by applying the following settings to your Helm command: `--set auditLog.enabled=true --set auditLog.level=0`. See the Enabling the API Audit Log to Record System Events documentation and #48941.
### Previous Rancher Behavior Changes - Cluster Provisioning
- Rancher v2.12.0:
  - Rancher's `system-upgrade-controller` app is now managed by the `systemchart` handler in downstream provisioned RKE2/K3s clusters. For additional information refer to this comment and see #47737.
  - Rancher v2.12.0 introduces changes in Custom Resource Definition (CRD) validations for `dynamicschemas.management.cattle.io` and dynamically generated CRDs:

    **DynamicSchema (`dynamicschemas.management.cattle.io`)**

    This CRD had a generic schema that allowed any field to be set. It has been updated to only allow the expected fields. This is not a user-facing CRD and is used internally by Rancher.

    **InfrastructureMachine CRDs**

    These are the CAPI InfrastructureMachine CRDs defined for the Rancher Cluster API (CAPI) infrastructure provider. They are dynamically generated and are named `<name>machines.rke-machine.cattle.io`, where `<name>` is derived from the node driver used to provision machines with a given infrastructure provider. Each active node driver has an associated InfrastructureMachine CRD. InfrastructureMachine objects are generated automatically by Rancher from other configuration objects.

    The following validations were changed in this CRD schema:

    - `spec.common.cloudCredentialSecretName`:
      - Value must be 317 characters long or less (`<namespace>:<secretname>`).
    - `spec.common.labels`:
      - Label values are no longer allowed to take a `null` value.
    - `spec.common.taints`, in each taint object:
      - Fields `effect` and `key` are now required.
      - Field `timeAdded` is now required to be in the `date-time` format.
      - All fields are no longer allowed to take a `null` value.
    - `status.addresses`, in each address object:
      - Fields `address` and `type` are now required and are no longer allowed to take a `null` value.
      - Field `type` must take one of the following values: `Hostname`, `ExternalIP`, `InternalIP`, `ExternalDNS`, or `InternalDNS`.
      - Field `address` must be between 1 and 256 characters long.
    - `status.conditions`, in each condition object:
      - Fields `status` and `type` are now required.
      - No fields are allowed to take a `null` value anymore.
    - `status`:
      - No fields are allowed to take a `null` value anymore.

    **InfrastructureMachineTemplate CRDs**

    These are the CAPI InfrastructureMachineTemplate CRDs defined for the Rancher CAPI infrastructure provider. They are dynamically generated and are named `<name>machinetemplates.rke-machine.cattle.io`, where `<name>` is derived from the node driver used to provision machines with a given infrastructure provider. Each active node driver has an associated InfrastructureMachineTemplate CRD. InfrastructureMachineTemplate objects are generated automatically by Rancher from other configuration objects.

    The following validations were changed in this CRD schema:

    - `spec.template.spec.common.cloudCredentialSecretName`:
      - Value must be 317 characters long or less (`<namespace>:<secretname>`).
    - `spec.template.spec.common.labels`:
      - Label values are no longer allowed to take a `null` value.
    - `spec.template.spec.common.taints`, in each taint object:
      - Fields `effect` and `key` are now required.
      - Field `timeAdded` is now required to be in the `date-time` format.
      - No fields are allowed to take a `null` value anymore.

    See #49402 for more information.
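    To illustrate the stricter validations, here is a hypothetical machine object fragment that satisfies the new rules; the CRD name and all field values are placeholders that depend on your node driver:

    ```yaml
    # Fragment of a <name>machines.rke-machine.cattle.io object (names are illustrative).
    spec:
      common:
        cloudCredentialSecretName: cattle-global-data:cc-abc12  # <namespace>:<secretname>, max 317 chars
        taints:
          - key: node-role.example.com/dedicated  # now required
            value: "true"
            effect: NoSchedule                    # now required
            timeAdded: "2025-08-01T00:00:00Z"     # must be in date-time format
    ```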
### Previous Rancher Behavior Changes - Continuous Delivery (Fleet)
- Rancher v2.12.0:
  - A migration patching a service account is removed, meaning the `rancher/kubectl` image is no longer needed. See fleet#3601.
## Long-standing Known Issues
### Long-standing Known Issues - Cluster Provisioning
- Not all cluster tools can be installed on a hardened cluster.
- Rancher v2.12.0:
  - After upgrading Rancher from v2.11.x to v2.12.0, all imported clusters created by directly instantiating a `cluster.provisioning.cattle.io` object fail to reconnect to Rancher. For a solution regarding affected imported clusters, please refer to this comment. Note that since v2.11.0, imported clusters are instead created via `cluster.management.cattle.io`; see release note #13151. Additionally, creating custom resources directly is not an officially supported method of creating imported clusters. See #51066.
- Rancher v2.8.1:
  - When you attempt to register a new etcd/controlplane node in a CAPR-managed cluster after a failed etcd snapshot restoration, the node can become stuck in a perpetual paused state, displaying the error message `[ERROR] 000 received while downloading Rancher connection information. Sleeping for 5 seconds and trying again`. As a workaround, you can unpause the cluster by running `kubectl edit clusters.cluster clustername -n fleet-default` and setting `spec.unpaused` to `false`. See #43735.
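    A non-interactive equivalent of this workaround, assuming the resource and `spec.unpaused` field described above (replace `clustername` with your cluster's name):

    ```sh
    # Merge-patch the cluster object to unpause reconciliation.
    kubectl patch clusters.cluster clustername -n fleet-default \
      --type merge -p '{"spec":{"unpaused":false}}'
    ```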
- Rancher v2.7.2:
  - If you upgrade or update any hosted cluster, and go to Cluster Management > Clusters while the cluster is still provisioning, the Registration tab is visible. Registering a cluster that is already registered with Rancher can cause data corruption. See #8524.
### Long-standing Known Issues - RKE2 Provisioning
- Rancher v2.7.7:
  - Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve `Active` status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #34518 and #42834.
- Rancher v2.7.6:
### Long-standing Known Issues - K3s Provisioning
- Rancher v2.7.6:
- Rancher v2.7.2:
  - Clusters remain in an `Updating` state even when they contain nodes in an `Error` state. See #39164.
### Long-standing Known Issues - Rancher App (Global UI)
- Rancher v2.10.0:
  - After deleting a Namespace or Project in the Rancher UI, the Namespace or Project remains visible. As a workaround, refresh the page. See #12220.
- Rancher v2.9.2:
  - Although system mode node pools must have at least one node, the Rancher UI allows a minimum node count of zero. Inputting a zero minimum node count through the UI can cause cluster creation to fail due to an invalid parameter error. To prevent this error, enter a minimum node count at least equal to the node count. See #11922.
- Rancher v2.7.7:
  - When creating a cluster in the Rancher UI, the `Cluster Name` field does not allow underscores (`_`). See #9416.
### Long-standing Known Issues - Hosted Rancher
- Rancher v2.7.5:
  - The Cluster page shows the Registration tab when updating or upgrading a hosted cluster. See #8524.
### Long-standing Known Issues - EKS
- Rancher v2.7.0:
  - EKS clusters on Kubernetes v1.21 or below on Rancher v2.7 cannot be upgraded. See #39392.
### Long-standing Known Issues - Authentication
- Rancher v2.9.0:
  - There are some known issues with the OpenID Connect provider support:
    - When the generic OIDC auth provider is enabled, and you attempt to add auth provider users to a cluster or project, users are not populated in the dropdown search bar. This is expected behavior, as the OIDC auth provider alone is not searchable. See #46104.
    - When the generic OIDC auth provider is enabled, auth provider users that are added to a cluster/project by their username are not able to access resources upon logging in. A user will only have access to resources upon login if the user is added by their userID. See #46105.
    - When the generic OIDC auth provider is enabled and an auth provider user in a nested group is logged into Rancher, the user will see the following error when they attempt to create a Project: `projectroletemplatebindings.management.cattle.io is forbidden: User "u-gcxatwsnku" cannot create resource "projectroletemplatebindings" in API group "management.cattle.io" in the namespace "p-9t5pg"`. However, the project is still created. See #46106.
### Long-standing Known Issues - Rancher Webhook
- Rancher v2.7.2:
  - A webhook is installed in all downstream clusters. There are several issues that users may encounter with this functionality:
    - If you roll back from Rancher v2.7.2 or later to a Rancher version earlier than v2.7.2, the webhooks will remain in downstream clusters. Since the webhook is designed to be 1:1 compatible with specific versions of Rancher, this can cause unexpected behaviors downstream. The Rancher team has developed a script which should be used after the rollback is complete (meaning after a Rancher version earlier than v2.7.2 is running). This removes the webhook from affected downstream clusters. See #40816.
### Long-standing Known Issues - Virtualization Management (Harvester)
- Rancher v2.7.2:
  - If you're using Rancher v2.7.2 with Harvester v1.1.1 clusters, you won't be able to select the Harvester cloud provider when deploying or updating guest clusters. The Harvester release notes contain instructions on how to resolve this. See #3750.
### Long-standing Known Issues - Backup/Restore
- When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location. It must continue to use the same URL.
- Rancher v2.7.7:
  - Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve `Active` status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #34518 and #42834.
### Long-standing Known Issues - Logging
- Rancher v2.12.0:
  - When the Rancher API Audit Log is enabled, Rancher does not validate new AWS Cloud Credentials to make sure they are valid. See #51079.