Release v2.13.2
Important: If you are using Active Directory Federation Services (AD FS), upgrading to Rancher v2.10.1 or later may cause issues with authentication that require manual intervention. These issues occur because the AD FS Relying Party Trust is not able to pick up a signature verification certificate from the metadata. For more information, see #48655. These issues can be corrected by either of two methods:
- Updating the Relying Party Trust information from federation metadata (Relying Party Trust -> Update from Federation Metadata...)
- Directly adding the certificate (Relying Party Trust -> Properties -> Signature tab -> Add -> Select the certificate).
Rancher v2.13.2 is the latest patch release of Rancher. This is a
Community version release that introduces maintenance updates and bug
fixes.
For more information on new features in the general minor release, see
the v2.13.0 release
notes.
Rancher App (Global UI)
Major Bug Fixes
- Fixed an issue where the selection of pods in the Rancher UI (Workloads > Pods) would reset when the status of a pod changed. See #16094.
- Added further improvements that reduce the number of failed network requests when fetching resources shown in lists in the Rancher UI, by reducing the length of the URL used in the request. This failure was more likely to happen when viewing resources in the local cluster as an administrator. See #16216.
Install/Upgrade Notes
If you’re installing Rancher for the first time, your environment must
fulfill the installation
requirements.
Important: Chart name change for Rancher Prime. The chart name change introduced in Rancher Prime v2.13.1 has been reverted. The chart name rancher should be used for all installations and upgrades. As an example, the installation command is now helm install rancher rancher-prime/rancher.
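As a hedged sketch, an install or upgrade with the reverted chart name might look like the following; the repository alias (rancher-prime), namespace, and hostname value are illustrative assumptions that may differ in your environment:

```sh
# Install or upgrade using the reverted chart name "rancher".
# Repo alias, namespace, and hostname below are illustrative assumptions.
helm repo update
helm upgrade --install rancher rancher-prime/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com
```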
Important: Rancher now requires the cluster it runs on to have the Kubernetes API Aggregation Layer enabled, because Rancher extends Kubernetes with additional APIs by registering its own extension API server. Note that all Kubernetes versions supported by this Rancher release's K8s distributions (RKE2/K3s) have the aggregation layer configured and enabled by default. Refer to the Extension API Server documentation and #50400 for more information.
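One generic way to confirm that the aggregation layer is functioning on the cluster that runs Rancher is to list the registered APIService objects; this is a standard Kubernetes check, not a Rancher-specific command:

```sh
# APIService entries whose SERVICE column is not "Local" are served through
# the aggregation layer; all entries should report AVAILABLE=True.
kubectl get apiservices
```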
Important: Rancher Kubernetes Engine (RKE/RKE1) has reached end of life as of July 31, 2025. Rancher versions 2.12.0 and later no longer support provisioning or managing downstream RKE1 clusters. We recommend replatforming RKE1 clusters to RKE2 to ensure continued support and security updates. Learn more about the transition here.
Rancher now has a pre-upgrade validation check for RKE1 resources, which fails and lists any RKE1 resources that are present. Refer to the RKE1 Resource Validation and Upgrade Requirements documentation and #50286 for more information.
Important: It is crucial that you review the available disk space on your nodes and plan accordingly before upgrading to Rancher v2.12.0 and later to avoid potential disk pressure and pod eviction issues. For additional information refer to the UI Server Side Pagination - Disk Space documentation.
Important: Rancher now has an AUDIT_LOG_ENABLED option that enables API audit logs for a Rancher installation. In Rancher versions 2.11.x and earlier, only the AUDIT_LEVEL could be set, and the default log level (0) would disable the audit log. In Rancher versions 2.12.x and later, the default log level (0) now only contains the log request and response metadata, and can be set when configuring AUDIT_LOG_ENABLED. If installing or upgrading via Helm, you can enable the API audit logs and specify the log level by applying the following settings to your Helm command: --set auditLog.enabled=true --set auditLog.level=0. See the Enabling the API Audit Log to Record System Events documentation and #48941.
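For example, a Helm upgrade that enables audit logging at the default metadata-only level might look like the sketch below; the repository alias and namespace are assumptions:

```sh
# Enable API audit logs at level 0 (request/response metadata only),
# keeping all other existing chart values.
helm upgrade rancher rancher-prime/rancher \
  --namespace cattle-system \
  --reuse-values \
  --set auditLog.enabled=true \
  --set auditLog.level=0
```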
Changes in Image Artifacts
Image artifact digest files were renamed in Rancher v2.12.0, v2.11.4, and v2.10.8. Until this change, separate image digest files for each operating system and architecture had been maintained for compatibility reasons. With this change, only one file is provided for each operating system:
- The rancher-images-digests-linux-amd64.txt and rancher-images-digests-linux-arm64.txt files are renamed to rancher-images-digests-linux.txt.
- The rancher-images-digests-windows-ltsc2019.txt and rancher-images-digests-windows-ltsc2022.txt files are renamed to rancher-images-digests-windows.txt.
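If you script around these artifacts, update any hard-coded file names accordingly. A minimal sketch, assuming the renamed files continue to be published as GitHub release assets under the usual URL pattern:

```sh
# Fetch the consolidated Linux digests file for a given release tag.
# The asset URL pattern here is an assumption; verify it against the release page.
RANCHER_VERSION=v2.13.2
curl -fsSLO "https://github.com/rancher/rancher/releases/download/${RANCHER_VERSION}/rancher-images-digests-linux.txt"
```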
Upgrade Requirements
- Creating backups: Create a backup before you upgrade Rancher. To roll back Rancher after an upgrade, you must first back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to the same state as when the backup was created, any changes post-upgrade will not be included after the restore.
- Helm version requirements:
  - To manage Rancher 2.12.x and later, you must upgrade your Helm client to version 3.18 or newer.
  - This change is required to reflect the addition of Kubernetes 1.33 support with this release.
  - Currently, the official Helm Version Support Policy dictates that only Helm 3.18 supports the proper Kubernetes version range for Rancher 2.12.
- CNI requirements:
  - For Kubernetes v1.19 and later, disable firewalld as it’s incompatible with various CNI plugins. See #28840.
  - When upgrading or installing a Linux distribution that uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or later, upgrade to RKE v1.19.2 or later to get Flannel v0.13.0, which supports nf_tables. See Flannel #1317.
- Requirements for air-gapped environments:
  - When using a proxy in front of an air-gapped Rancher instance, you must pass additional parameters to NO_PROXY. See the documentation and issue #2725. A hedged example is sketched after this list.
  - When installing Rancher with Docker in an air-gapped environment, you must supply a custom registries.yaml file to the docker run command, as shown in the K3s documentation. If the registry has certificates, then you’ll also need to supply those. See #28969.
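As a minimal sketch of the proxy point above, the Rancher chart's proxy and noProxy values can carry the extra NO_PROXY entries; the proxy URL and address ranges below are illustrative and must match your environment (commas inside a --set value are escaped with a backslash):

```sh
# Hedged example: install Rancher behind a proxy with an extended NO_PROXY list.
helm upgrade --install rancher rancher-prime/rancher \
  --namespace cattle-system \
  --set proxy="http://proxy.example.com:8888" \
  --set noProxy="127.0.0.0/8\,10.0.0.0/8\,172.16.0.0/12\,192.168.0.0/16\,.svc\,.cluster.local"
```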
Versions
Images
- rancher/rancher:v2.13.2
Tools
- CLI - v2.13.2
Kubernetes Versions for RKE2/K3s
- v1.34.3 (Default)
- v1.33.7
- v1.32.11
Rancher Helm Chart Versions
In Rancher v2.6.0 and later, in the Apps & Marketplace UI, many Rancher Helm charts are named with a major version that starts with 100. This prevents simultaneous upstream and Rancher changes from causing conflicting version increments, and also complies with semantic versioning (SemVer), which is a requirement for Helm. You can see the upstream version number of a chart in its build metadata, for example: 100.0.0+up2.1.0. See #32294.
Previous Rancher Behavior Changes
Previous Rancher Behavior Changes - Rancher General
- Rancher v2.13.0:
  - Official support for Kubernetes v1.31 and older versions has been removed. You can no longer provision new RKE2 or K3s clusters using Kubernetes versions that fall outside of the supported range (v1.32 - v1.34). See #51253.
Previous Rancher Behavior Changes - Rancher App (Global UI)
- Rancher v2.13.0:
  - The Rancher UI for provisioning hosted Kubernetes clusters (AKS, EKS, and GKE) has been updated to align with the new Cluster Provisioning v2 (kev2) framework. This change replaces the reliance on the older kontainerdriver (kev1) resources to determine which hosted providers are available for display. The UI now uses a new setting to manage the visibility of these providers, ensuring consistency and future compatibility. See #15391.
  - Rancher’s session inactivity logic has been moved from the UI to the backend. A new session TTL setting, auth-user-session-idle-ttl-minutes, was introduced; it sets the maximum time a user is allowed to be idle within a browser session before the session expires. To enable the idle timeout feature, you must supply auth-user-session-idle-ttl-minutes and set it to a value lower than the existing absolute session limit, auth-user-session-ttl-minutes. This new backend-driven mechanism, along with its associated TTL setting, replaces the previous session timeout configuration in the UI under Global Settings > Performance. See #12552. A hedged example of setting this value is sketched after this list.
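As a minimal sketch, the idle TTL can be supplied like any other Rancher setting, for example with kubectl against the cluster where Rancher runs; the 60-minute value is illustrative and must stay below auth-user-session-ttl-minutes:

```sh
# Hedged example: set the idle session TTL to 60 minutes via the Rancher
# settings resource. The absolute limit (auth-user-session-ttl-minutes) must be higher.
kubectl patch setting auth-user-session-idle-ttl-minutes \
  --type=merge -p '{"value":"60"}'
```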
Future Rancher Behavior Changes
Retention Policy for Rancher App Charts
To improve repository performance, Rancher is introducing a lifecycle
management policy for charts available in the Apps feature of Rancher,
specifically in the "Rancher" repository.
- The Policy: Rancher will transition from a cumulative model (retaining all historical versions forever) to a retention model that preserves chart versions for the seven (7) most recent Rancher minor releases (approximately a 2.5-year window).
- Timeline - Rancher v2.13 & v2.14: Legacy chart versions (older than the 7-version window) remain available.
- Rancher v2.15: This will be the first version to enforce the policy. Versions falling outside the 7-version window and older than two years will no longer be available.
Impact: This change is non-destructive for existing Rancher
installations. Historical versions will remain accessible but will not
be available in newer release branches once they age out of the
7-version window. You are advised to upgrade your applications before
upgrading to Rancher v2.15. Uninstallation after v2.15, and replacement
with an updated version, will still be possible.
Long-standing Known Issues
Long-standing Known Issues - Rancher General
- Rancher v2.12.2:
  - The SUSE Customer Center (SCC) system view has a known issue, currently under investigation, involving duplicate Rancher Manager registrations. See rancher/scc-operator #38.
Long-standing Known Issues - Cluster Provisioning
- Not all cluster tools can be installed on a hardened cluster.
- Rancher v2.13.0:
  - Provisioning or importing an Amazon EKS downstream cluster fails when the Rancher Server is running in an IPv6-only or dual-stack environment. See #52154.
- Rancher v2.8.1:
  - When you attempt to register a new etcd/controlplane node in a CAPR-managed cluster after a failed etcd snapshot restoration, the node can become stuck in a perpetual paused state, displaying the error message [ERROR] 000 received while downloading Rancher connection information. Sleeping for 5 seconds and trying again. As a workaround, you can unpause the cluster by running kubectl edit clusters.cluster clustername -n fleet-default and setting spec.unpaused to false. See #43735. A one-line equivalent is sketched after this list.
- Rancher v2.7.2:
  - If you upgrade or update any hosted cluster, and go to Cluster Management > Clusters while the cluster is still provisioning, the Registration tab is visible. Registering a cluster that is already registered with Rancher can cause data corruption. See #8524.
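As a one-line equivalent of the kubectl edit workaround in the Rancher v2.8.1 item above, with the cluster name and field path taken from that note:

```sh
# Hedged example: unpause the cluster without an interactive edit.
kubectl patch clusters.cluster <clustername> -n fleet-default \
  --type=merge -p '{"spec":{"unpaused":false}}'
```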
Long-standing Known Issues - RKE2 Provisioning
- Rancher v2.7.7:
  - Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #34518 and #42834.
- Rancher v2.7.6:
Long-standing Known Issues - K3s Provisioning
- Rancher v2.7.6:
- Rancher v2.7.2:
  - Clusters remain in an Updating state even when they contain nodes in an Error state. See #39164.
Long-standing Known Issues - Rancher App (Global UI)
- Rancher v2.12.1:
  - When a standard user with the Cluster Owner role attempts to edit an Azure or AKS cluster, the Machine Pools section shows an error Cannot read properties of undefined…. As a workaround, standard users must manually add their cloud credentials to create, edit, and manage Azure or AKS clusters. See #15241.
- Rancher v2.10.0:
  - After deleting a Namespace or Project in the Rancher UI, the Namespace or Project remains visible. As a workaround, refresh the page. See #12220.
- Rancher v2.9.2:
  - Although system mode node pools must have at least one node, the Rancher UI allows a minimum node count of zero. Inputting a zero minimum node count through the UI can cause cluster creation to fail due to an invalid parameter error. To prevent this error from occurring, enter a minimum node count at least equal to the node count. See #11922.
- Rancher v2.7.7:
  - When creating a cluster, the Rancher UI does not allow the use of an underscore (_) in the Cluster Name field. See #9416.
Long-standing Known Issues - Hosted Rancher
- Rancher v2.7.5:
  - The Cluster page shows the Registration tab when updating or upgrading a hosted cluster. See #8524.
Long-standing Known Issues - EKS
- Rancher v2.7.0:
  - EKS clusters on Kubernetes v1.21 or below on Rancher v2.7 cannot be upgraded. See #39392.
Long-standing Known Issues - Authentication
- Rancher v2.9.0:
  - There are some known issues with the OpenID Connect provider support:
    - When the generic OIDC auth provider is enabled, and you attempt to add auth provider users to a cluster or project, users are not populated in the dropdown search bar. This is expected behavior, as the OIDC auth provider alone is not searchable. See #46104.
    - When the generic OIDC auth provider is enabled, auth provider users that are added to a cluster/project by their username are not able to access resources upon logging in. A user will only have access to resources upon login if the user is added by their userID. See #46105.
    - When the generic OIDC auth provider is enabled and an auth provider user in a nested group is logged into Rancher, the user will see the following error when they attempt to create a Project: projectroletemplatebindings.management.cattle.io is forbidden: User "u-gcxatwsnku" cannot create resource "projectroletemplatebindings" in API group "management.cattle.io" in the namespace "p-9t5pg". However, the project is still created. See #46106.
Long-standing Known Issues - Rancher Webhook
- Rancher v2.7.2:
  - A webhook is installed in all downstream clusters. There are several issues that users may encounter with this functionality:
    - If you roll back from Rancher v2.7.2 or later to a Rancher version earlier than v2.7.2, the webhooks will remain in downstream clusters. Since the webhook is designed to be 1:1 compatible with specific versions of Rancher, this can cause unexpected behavior downstream. The Rancher team has developed a script which should be used after the rollback is complete (meaning after a Rancher version earlier than v2.7.2 is running). This removes the webhook from affected downstream clusters. See #40816.
Long-standing Known Issues - Virtualization Management (Harvester)
- Rancher v2.13.1:
  - When upgrading to Rancher v2.13.1 or later while using Harvester v1.6.1, users may encounter an issue with their Load Balancers in downstream clusters using the Harvester Cloud Provider, and must perform the following workaround to instruct Calico not to use any of the IPs/interfaces managed by kube-vip. See #9767.
- Rancher v2.7.2:
  - If you’re using Rancher v2.7.2 with Harvester v1.1.1 clusters, you won’t be able to select the Harvester cloud provider when deploying or updating guest clusters. The Harvester release notes contain instructions on how to resolve this. See #3750.
Long-standing Known Issues - Backup/Restore
- When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location. It must continue to use the same URL.
- Rancher v2.13.0:
  - When performing a rollback from Rancher v2.13.0 to v2.12.3 using the backup and restore operator (BRO), the restore does not complete successfully. See #844. To work around this issue, you must scale down your Rancher deployment and uninstall the Webhook chart before performing the restore; a hedged sketch follows this list. For details, refer to this Knowledge Base article.
- Rancher v2.7.7:
  - Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #34518 and #42834.
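As a hedged sketch of the scale-down step in the Rancher v2.13.0 rollback workaround above, assuming the default deployment and webhook release names in the cattle-system namespace (confirm the exact steps against the linked Knowledge Base article):

```sh
# Hedged example: scale Rancher down and remove the webhook chart before restoring.
kubectl scale deployment rancher -n cattle-system --replicas=0
helm uninstall rancher-webhook -n cattle-system
```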