Release v2.13.0
Important: If you are using Active Directory Federation Services (AD FS), upgrading to Rancher v2.10.1 or later may cause authentication issues that require manual intervention. These issues occur because the AD FS Relying Party Trust is unable to pick up a signature verification certificate from the metadata. They can be corrected by either of two methods:
- Updating the Relying Party Trust information from federation metadata (Relying Party Trust -> Update from Federation Metadata...)
- Directly adding the certificate (Relying Party Trust -> Properties -> Signature tab -> Add -> Select the certificate).
For more information see #48655.
Important: Rancher-Istio has been deprecated in Rancher v2.12.0; turn to the SUSE Application Collection build of Istio for enhanced security (included in SUSE Rancher Prime subscriptions). Detailed information can be found in this announcement.
Rancher v2.13.0 is the latest minor release of Rancher. This is a Community version release that introduces new features, enhancements, and various updates.
Rancher General
Features and Enhancements
- Rancher now supports Kubernetes v1.34. See #51252 for information on Rancher support for Kubernetes v1.34. You can view the upstream Kubernetes changelogs for v1.34 for a complete list of changes.
Behavior Changes
- Official support for Kubernetes v1.31 and older versions has been removed. You can no longer provision new RKE2 or K3s clusters using the Kubernetes versions that fall outside of the supported range (v1.32 - v1.34). See #51253.
Major Bug Fixes
- Fixed an issue where the cluster-scoped `v1.ext.cattle.io` APIService was not correctly deleted during a Rancher Helm uninstall, leading to a "dangling" API service. See #51976.
Rancher App (Global UI)
Features and Enhancements
- Introduced an optimized view for the Cluster List on the Home Page, powered by Server-Side Pagination (SSP). This enhancement improves the load time and UI experience for users managing a large number of Kubernetes clusters. See #15569 and #15570.
- Introduced support for dynamic content on the Rancher Manager Home screen, allowing timely information, such as product announcements, new releases, feature highlights, and relevant updates to be displayed upon logging in. See #15342.
- Rancher Default and Prime themes have been refreshed. See #15166.
- Introduced a Cron Editor component, providing a user interface for defining and managing CronJob schedules. See #14341.
Behavior Changes
- The Rancher UI for provisioning hosted Kubernetes clusters (AKS, EKS, and GKE) has been updated to align with the new Cluster Provisioning v2 (kev2) framework. This change replaces the reliance on the older kontainerdriver (kev1) resources to determine which hosted providers are available for display. The UI now uses a new setting to manage the visibility of these providers, ensuring consistency and future compatibility. See #15391.
- Rancher's session inactivity logic has been moved from the UI to the backend. A new session TTL setting, `auth-user-session-idle-ttl-minutes`, was introduced; it sets the maximum time a user is allowed to be idle within a browser session before the session expires. To enable the idle timeout feature, you must supply `auth-user-session-idle-ttl-minutes` and set it to a value lower than the existing absolute session limit, `auth-user-session-ttl-minutes`. This new backend-driven mechanism, along with its associated TTL setting, replaces the previous session timeout configuration in the UI under Global Settings > Performance. See #12552.
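The idle TTL is an ordinary Rancher setting, so it can also be adjusted from the management (local) cluster. The sketch below is illustrative only and assumes kubectl access to that cluster; the value of 30 minutes is an example, not a default, and the session settings documentation remains the authoritative reference.

```shell
# Illustrative only: set a 30-minute idle timeout, then confirm it is lower
# than the absolute session TTL (auth-user-session-ttl-minutes).
kubectl patch settings.management.cattle.io auth-user-session-idle-ttl-minutes \
  --type merge -p '{"value":"30"}'
kubectl get settings.management.cattle.io \
  auth-user-session-ttl-minutes auth-user-session-idle-ttl-minutes
```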
Major Bug Fixes
- Fixed an issue where, when registering a Rancher Manager instance with SCC, the Status field in the Rancher UI failed to update. See #14985.
- Fixed an issue where attempting to register Rancher Manager with an invalid SUSE Customer Center (SCC) registration code resulted in a misleading error message: the UI incorrectly suggested connection issues instead of stating that the registration code was invalid. See #14940.
- Fixed an issue affecting RKE2 clusters provisioned with the Nutanix Node Driver, where editing any cluster setting (like changing the Kubernetes version) would incorrectly modify and clear the `vmNetwork` field in the underlying Nutanix machine pool configuration (setting it to null). This caused the nodes to start reprovisioning and fail with a `nutanix-vm-network cannot be empty` error. See #15269.
Known Issues
- In an air-gapped environment, when attempting to access the legacy v3 API-UI page directly via the host URL (`<RANCHER-SERVER-URL>/v3`), the page fails to load and displays a blank page. See #52790.
Authentication
Features and Enhancements
- In environments using GitHub, you can configure the new GitHub App authentication provider in Rancher, which allows users to authenticate against a GitHub Organization account using a dedicated GitHub App. This new provider runs alongside the existing standard GitHub authentication provider, offering increased security and better management of permissions based on GitHub Organization teams. See #50517.
- Rancher supports the ability to configure OIDC Single Logout (SLO). See #49013.
- Rancher no longer stores user tokens for the Generic OIDC and Cognito authentication providers. An automatic cleaner has been implemented to remove any previously stored tokens for Generic OIDC and Cognito during the upgrade process. See #52136.
- Rancher now provides Terraform resource support for managing the Generic OIDC authentication provider through Infrastructure as Code (IaC). This enhancement allows users to programmatically configure, enable, and disable Generic OIDC authentication, including setting endpoints, client secrets, and claims mappings, directly using the `rancher2_auth_config_oidc` resource within `terraform-provider-rancher2`. See #51059.
Cluster Provisioning
Features and Enhancements
- Rancher now includes initial support for IPv6, providing the foundational capabilities needed to manage clusters using IPv6 addressing. You can deploy Rancher on IPv6-only or dual-stack clusters, and you can provision IPv6-only or dual-stack clusters on Amazon EC2 or DigitalOcean using node drivers, as well as create custom clusters with IPv6 or dual-stack support. See #49689.
- Rancher now uses Rancher Turtles as the default component for providing Cluster API (CAPI) controllers and Custom Resource Definitions (CRDs) necessary for RKE2 and K3s cluster provisioning (v2prov). This change replaces the previous Rancher Provisioning component. Upon upgrade, the Rancher Provisioning chart is automatically uninstalled from the Rancher management cluster and replaced with the Rancher Turtles chart. See #52254.
Major Bug Fixes
- Fixed an issue where Standard Users assigned the Cluster Owner role on a downstream cluster were unable to view or restore existing etcd snapshots via the Rancher UI. Cluster Owners now have the correct permissions to see and restore snapshots as expected. See #52307.
Known Issues
- Cluster provisioning does not work as expected in air-gapped Rancher setups because `capi-controller-manager` does not reach an Active state. See #52816. To resolve this issue, refer to this workaround.
Rancher Webhook
Major Bug Fixes
- Fixed an issue related to Project Resource Quotas that led to the incorrect calculation of `usedLimit` values and subsequent admission webhook errors. The webhook validation now correctly checks the new desired quota state against the calculated usage, preventing errors when saving correct limits. `usedLimit` is dropped when a project is new and has no namespaces, to prevent persistence of stale or user-provided, bogus values. See #49041.
K3s Provisioning
Known Issues
- When attempting to provision an IPv6-only K3s cluster (either Custom or Node-Driver) with IPv6 CIDRs, the cluster becomes stuck in a provisioning state and does not become Active. This issue will be addressed in the RKE2/K3s November 2025 releases, which will be made available via a corresponding KDM release. See #51990.
RKE2 Provisioning
Known Issues
- When provisioning a Custom RKE2 cluster where the nodes are configured with IPv6-only addresses, the cluster fails to provision correctly. Specifically, the `rke2-server` service on the etcd-only nodes crashes repeatedly with a fatal error: `runtimes: failed to get runtime classes`. As a result, the etcd node continually flips between Waiting for Node Ref and Reconciling status, preventing the cluster from reaching an Active state. Fixes will be delivered via the November RKE2/K3s KDM release. See #51851.
Backup/Restore
Known Issues
- When performing a rollback from Rancher v2.13.0 to v2.12.3 using the backup and restore operator (BRO), the restore does not complete successfully. See #844. To work around this issue, you must scale down your Rancher deployment and uninstall the Webhook chart before performing the restore. For details, refer to this Knowledge Base article.
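For orientation, the pre-restore steps described above might look like the following sketch. It assumes a default installation (Rancher deployment named `rancher` and the webhook released as `rancher-webhook`, both in the `cattle-system` namespace); the linked Knowledge Base article is the authoritative procedure.

```shell
# Sketch only: scale Rancher down and remove the webhook chart release
# before running the restore, assuming default names and namespaces.
kubectl -n cattle-system scale deployment rancher --replicas=0
helm -n cattle-system uninstall rancher-webhook
```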
Continuous Delivery (Fleet)
Major Bug Fixes
- Fixed an issue where attempting to uninstall Rancher's Helm chart failed because recent Fleet updates introduced new cronjob resources, and when Rancher adopted the latest version of Fleet, the Rancher uninstall process was not updated to include the cleanup of these new resources. See #51478. To resolve this issue in previously affected versions, apply the workaround detailed in this Knowledge Base article.
Install/Upgrade Notes
- If you're installing Rancher for the first time, your environment must fulfill the installation requirements.
Important: Rancher now requires the cluster it runs on to have the Kubernetes API Aggregation Layer enabled, because Rancher extends Kubernetes with additional APIs by registering its own extension API server. Note that the RKE2 and K3s distributions of all Kubernetes versions supported by this Rancher release have the aggregation layer configured and enabled by default. Refer to the Extension API Server documentation and #50400 for more information.
Important: Rancher Kubernetes Engine (RKE/RKE1) has reached end of life as of July 31, 2025. Rancher versions 2.12.0 and later no longer support provisioning or managing downstream RKE1 clusters. We recommend replatforming RKE1 clusters to RKE2 to ensure continued support and security updates. Learn more about the transition here.
Rancher now has a pre-upgrade validation check for RKE1 resources, which fails the upgrade and lists the RKE1 resources if any are present. Refer to the RKE1 Resource Validation and Upgrade Requirements documentation and #50286 for more information.
Important: It is crucial that you review the available disk space on your nodes and plan accordingly before upgrading to Rancher v2.12.0 and later to avoid potential disk pressure and pod eviction issues. For additional information refer to the UI Server Side Pagination - Disk Space documentation.
Important: Rancher now has an enablement option, `AUDIT_LOG_ENABLED`, for API Audit Logs in a Rancher installation. In Rancher versions 2.11.x and earlier, only the `AUDIT_LEVEL` could be set, and the default log level (0) disabled the audit log. In Rancher versions 2.12.x and later, the default log level (0) now records only the request and response metadata, and it can be set when configuring `AUDIT_LOG_ENABLED`. If installing or upgrading via Helm, you can enable the API Audit Logs and specify the log level by applying the following settings to your Helm command: `--set auditLog.enabled=true --set auditLog.level=0`. See the Enabling the API Audit Log to Record System Events documentation and #48941.
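As a hedged illustration, a Helm upgrade that enables the audit log at level 0 might look like the sketch below. The repository alias, release name, and hostname are placeholders, and any values already used by your installation must still be supplied.

```shell
# Illustrative only: enable the API audit log (metadata-only level 0)
# during a Helm upgrade. Keep your existing chart values alongside these.
helm upgrade rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set auditLog.enabled=true \
  --set auditLog.level=0
```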
Changes in Image Artifacts
Image artifact digest files were renamed in Rancher v2.12.0, v2.11.4, and v2.10.8. Until this change, separate image digest files for each operating system and architecture had been maintained for compatibility reasons. With this change, only one file for each operating system is provided:
- The `rancher-images-digests-linux-amd64.txt` and `rancher-images-digests-linux-arm64.txt` files are renamed to `rancher-images-digests-linux.txt`.
- The `rancher-images-digests-windows-ltsc2019.txt` and `rancher-images-digests-windows-ltsc2022.txt` files are renamed to `rancher-images-digests-windows.txt`.
Upgrade Requirements
- Creating backups: Create a backup before you upgrade Rancher. To roll back Rancher after an upgrade, you must first back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to the same state as when the backup was created, any changes post-upgrade will not be included after the restore.
- Helm version requirements:
- To manage Rancher 2.12.x and later, you must upgrade your Helm client to version 3.18 or newer.
- This change is required to reflect the addition of Kubernetes 1.33 support with this release.
- Currently, the official Helm Version Support Policy dictates that only Helm 3.18 supports the proper Kubernetes version range for Rancher 2.12.
- CNI requirements:
- For Kubernetes v1.19 and later, disable firewalld as it's incompatible with various CNI plugins. See #28840.
- When upgrading or installing a Linux distribution that uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or later, upgrade to RKE v1.19.2 or later to get Flannel v0.13.0. Flannel v0.13.0 supports nf_tables. See Flannel #1317.
- Requirements for air-gapped environments:
- When using a proxy in front of an air-gapped Rancher instance, you must pass additional parameters to `NO_PROXY`. See the documentation and issue #2725.
- When installing Rancher with Docker in an air-gapped environment, you must supply a custom `registries.yaml` file to the `docker run` command, as shown in the K3s documentation. If the registry has certificates, then you'll also need to supply those. See #28969. A sketch of such a command appears after this list.
- Requirements for general Docker installs:
- When starting the Rancher Docker container, you must use the `privileged` flag. See documentation.
- When upgrading a Docker installation, a panic may occur in the container, which causes it to restart. After restarting, the container will come up and work as expected. See #33685.
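The air-gapped `docker run` invocation referenced in the list above might look roughly like this sketch; the registry hostname, file paths, and the `CATTLE_SYSTEM_DEFAULT_REGISTRY` value are placeholders, and the K3s registry documentation remains the authoritative reference.

```shell
# Sketch of an air-gapped single-node Docker install: mount a custom
# registries.yaml so the embedded K3s pulls images from a private registry.
docker run -d --restart=unless-stopped \
  --privileged \
  -p 80:80 -p 443:443 \
  -v /opt/rancher/registries.yaml:/etc/rancher/k3s/registries.yaml \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY=registry.example.com \
  registry.example.com/rancher/rancher:v2.13.0
# If the registry uses a private CA, also mount the certificate files at the
# paths referenced from registries.yaml.
```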
Versions
Please refer to the README for the latest and stable Rancher versions.
Please review our version documentation for more details on versioning and tagging conventions.
Images
- rancher/rancher:v2.13.0
Tools
- CLI - v2.13.0
Kubernetes Versions for RKE2/K3s
- v1.34.1 (Default)
- v1.33.5
- v1.32.9
Rancher Helm Chart Versions
In Rancher v2.6.0 and later, in the Apps & Marketplace UI, many Rancher Helm charts are named with a major version that starts with 100. This avoids simultaneous upstream changes and Rancher changes from causing conflicting version increments. This also complies with semantic versioning (SemVer), which is a requirement for Helm. You can see the upstream version number of a chart in the build metadata, for example: 100.0.0+up2.1.0. See #32294.
Long-standing Known Issues
Long-standing Known Issues - Rancher General
- Rancher v2.12.2:
- The SUSE Customer Center (SCC) system view has a known issue being investigated regarding duplicate Rancher Manager registrations. See rancher/scc-operator #38.
Long-standing Known Issues - Cluster Provisioning
- Not all cluster tools can be installed on a hardened cluster.
- Rancher v2.8.1:
- When you attempt to register a new etcd/controlplane node in a CAPR-managed cluster after a failed etcd snapshot restoration, the node can become stuck in a perpetual paused state, displaying the error message `[ERROR] 000 received while downloading Rancher connection information. Sleeping for 5 seconds and trying again`. As a workaround, you can unpause the cluster by running `kubectl edit clusters.cluster clustername -n fleet-default` and setting `spec.unpaused` to `false`. See #43735.
- Rancher v2.7.2:
- If you upgrade or update any hosted cluster, and go to Cluster Management > Clusters while the cluster is still provisioning, the Registration tab is visible. Registering a cluster that is already registered with Rancher can cause data corruption. See #8524.
Long-standing Known Issues - RKE2 Provisioning
- Rancher v2.7.7:
- Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #34518 and #42834.
- Rancher v2.7.6:
Long-standing Known Issues - K3s Provisioning
- Rancher v2.7.6:
- Rancher v2.7.2:
- Clusters remain in an `Updating` state even when they contain nodes in an `Error` state. See #39164.
Long-standing Known Issues - Rancher App (Global UI)
- Rancher v2.12.1:
- When a standard user with the Cluster Owner role attempts to edit an Azure or AKS cluster, the Machine Pools section shows an error `Cannot read properties of undefined...`. As a workaround, standard users must manually add their cloud credentials to create, edit, and manage Azure or AKS clusters. See #15241.
- Rancher v2.10.0:
- After deleting a Namespace or Project in the Rancher UI, the Namespace or Project remains visible. As a workaround, refresh the page. See #12220.
- Rancher v2.9.2:
- Although system mode node pools must have at least one node, the Rancher UI allows a minimum node count of zero. Inputting a zero minimum node count through the UI can cause cluster creation to fail due to an invalid parameter error. To prevent this error from occurring, enter a minimum node count at least equal to the node count. See #11922.
- Rancher v2.7.7:
- When creating a cluster in the Rancher UI, the Cluster Name field does not allow the use of an underscore (`_`). See #9416.
Long-standing Known Issues - Hosted Rancher
- Rancher v2.7.5:
- The Cluster page shows the Registration tab when updating or upgrading a hosted cluster. See #8524.
Long-standing Known Issues - EKS
- Rancher v2.7.0:
- EKS clusters on Kubernetes v1.21 or below on Rancher v2.7 cannot be upgraded. See #39392.
Long-standing Known Issues - Authentication
- Rancher v2.9.0:
- There are some known issues with the OpenID Connect provider support:
- When the generic OIDC auth provider is enabled, and you attempt to add auth provider users to a cluster or project, users are not populated in the dropdown search bar. This is expected behavior as the OIDC auth provider alone is not searchable. See #46104.
- When the generic OIDC auth provider is enabled, auth provider users that are added to a cluster/project by their username are not able to access resources upon logging in. A user will only have access to resources upon login if the user is added by their userID. See #46105.
- When the generic OIDC auth provider is enabled and an auth provider user in a nested group is logged into Rancher, the user will see the following error when they attempt to create a Project: `projectroletemplatebindings.management.cattle.io is forbidden: User "u-gcxatwsnku" cannot create resource "projectroletemplatebindings" in API group "management.cattle.io" in the namespace "p-9t5pg"`. However, the project is still created. See #46106.
Long-standing Known Issues - Rancher Webhook
- Rancher v2.7.2:
- A webhook is installed in all downstream clusters. There are several issues that users may encounter with this functionality:
- If you roll back from Rancher v2.7.2 or later to a version earlier than v2.7.2, the webhooks will remain in downstream clusters. Since the webhook is designed to be 1:1 compatible with specific versions of Rancher, this can cause unexpected behaviors downstream. The Rancher team has developed a script that should be used after the rollback is complete (meaning after a Rancher version earlier than v2.7.2 is running); it removes the webhook from affected downstream clusters. See #40816.
Long-standing Known Issues - Virtualization Management (Harvester)
- Rancher v2.7.2:
- If you're using Rancher v2.7.2 with Harvester v1.1.1 clusters, you won't be able to select the Harvester cloud provider when deploying or updating guest clusters. The Harvester release notes contain instructions on how to resolve this. See #3750.
Long-standing Known Issues - Backup/Restore
- When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location. It must continue to use the same URL.
- Rancher v2.7.7:
- Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #34518 and #42834.