Release v2.14.1
Rancher v2.14.1 is the latest patch release of Rancher. This is a Community version release that introduces maintenance updates and bug fixes.
For more information on new features in the general minor release, see the v2.14.0 release notes.
Important
- If you are using Active Directory Federation Service (AD FS), upgrading to Rancher v2.10.1 or later may cause issues with authentication, requiring manual intervention. These issues are due to the AD FS Relying Party Trust not being able to pick up a signature verification certificate from the metadata. For more information, see #48655. These issues can be corrected by either of two methods:
  - Updating the Relying Party Trust information from federation metadata (Relying Party Trust → Update from Federation Metadata…).
  - Directly adding the certificate (Relying Party Trust → Properties → Signature tab → Add → Select the certificate).
Authentication
Major Bug Fixes
- Fixed an issue with the Google OAuth authentication provider where upgrading to Rancher v2.14.0 would break authentication, preventing users from logging in. See #54416.
Cluster Provisioning
Behavior Changes
- Deprecation: Starting with Rancher v2.14.1, Cluster API Addon Provider Fleet (CAAPF) is disabled by default in preparation for its deprecation in a future release of Rancher. While standard Fleet integration is still available through Rancher, users who upgrade to Rancher v2.14.1 and run CAPI clusters provisioned using CAAPF must take additional steps. To enable the feature gate, set `features.use-caapf.enabled` to `true` in the CAPI chart values, as in the sketch below. Refer to Features and the Cluster API Addon Provider Fleet documentation for more information. See #2176.
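A minimal sketch of re-enabling the feature gate at upgrade time. The chart, release name, and namespace here are assumptions for illustration; only the `features.use-caapf.enabled` key comes from the note above:

```bash
# Illustrative only: chart/release names and namespace are assumptions.
# Only the features.use-caapf.enabled value comes from the release note.
helm upgrade rancher-turtles turtles/rancher-turtles \
  --namespace rancher-turtles-system \
  --reuse-values \
  --set features.use-caapf.enabled=true
```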
K3s and RKE2 Provisioning
Major Bug Fixes
- Fixed an issue where, after updating an RKE2 or K3s cluster to a supported Kubernetes version, the S3 snapshot retention would default to `5`, ignoring any previously set `snapshotRetention` values. Refer to #13769 for more information. As a workaround, Rancher v2.14.0 introduced the ability to explicitly specify `etcd-s3-retention` to manage S3 snapshot retention independently, as sketched below. See also #53046.
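As a hedged sketch of that workaround, the retention value can be pinned on the provisioned cluster object. The placement under `machineGlobalConfig` and the cluster name are assumptions; only the `etcd-s3-retention` option name comes from the note above:

```bash
# Sketch: explicitly pin S3 snapshot retention on a provisioned cluster.
# Field placement and the cluster name "my-cluster" are assumptions.
kubectl -n fleet-default patch clusters.provisioning.cattle.io my-cluster \
  --type merge \
  -p '{"spec":{"rkeConfig":{"machineGlobalConfig":{"etcd-s3-retention":5}}}}'
```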
Continuous Delivery (Fleet)
Major Bug Fixes
- Fixed an issue where under certain circumstances, drift correction in Fleet v0.15.0 would not work as expected with Helm v4. See the v0.15.0 changelog and #4878.
Changes Since v2.14.0
See the full list of changes.
Install/Upgrade Notes
If you’re installing Rancher for the first time, your environment must fulfill the installation requirements.
Important
- Rancher now requires the cluster it runs on to have the Kubernetes API Aggregation Layer enabled. This is because Rancher extends Kubernetes with additional APIs by registering its own extension API server. Note that all Kubernetes versions supported by this Rancher version's K8s distributions (RKE2/K3s) have the aggregation layer configured and enabled by default.
  - Refer to the Extension API Server documentation and #50400 for more information.
- Rancher Kubernetes Engine (RKE/RKE1) has reached end of life as of July 31, 2025. Rancher versions 2.12.0 and later no longer support provisioning or managing downstream RKE1 clusters. We recommend replatforming RKE1 clusters to RKE2 to ensure continued support and security updates. Learn more about the transition here.
  - Rancher now has a pre-upgrade validation check for RKE1 resources, which fails and lists the RKE1 resources if any are present. Refer to #50286 for more information.
- Before upgrading to Rancher v2.12.0 or later, it is crucial that you review the available disk space on your nodes and plan accordingly to avoid potential disk pressure and pod eviction issues.
  - For additional information, refer to the UI Server Side Pagination - Disk Space documentation.
- Rancher now has an enablement option, `AUDIT_LOG_ENABLED`, for API Audit Logs on a Rancher installation.
  - In Rancher versions 2.11.x and earlier, only `AUDIT_LEVEL` could be set, and the default log level (0) disabled the audit log. In Rancher versions 2.12.x and later, the default log level (0) only records request and response metadata, and can be set when configuring `AUDIT_LOG_ENABLED`. If installing or upgrading via Helm, you can enable the API Audit Logs and specify the log level by adding `--set auditLog.enabled=true --set auditLog.level=0` to your Helm command, as shown below. See the Enabling the API Audit Log to Record System Events documentation and #48941.
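For example, a standard Helm upgrade with the two audit-log values applied. The hostname and chart repository are placeholders; the two `auditLog` flags come from the note above:

```bash
# The two --set auditLog.* flags come from the release note; the
# hostname and chart repository are placeholders.
helm upgrade --install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set auditLog.enabled=true \
  --set auditLog.level=0
```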
Changes in Image Artifacts
Image artifact digests were renamed in Rancher v2.12.0, v2.11.4, and v2.10.8. Before this change, separate image digests files for each operating system and architecture were maintained for compatibility reasons. With this change, only one file is provided for each operating system:

- The `rancher-images-digests-linux-amd64.txt` and `rancher-images-digests-linux-arm64.txt` files are renamed to `rancher-images-digests-linux.txt`.
- The `rancher-images-digests-windows-ltsc2019.txt` and `rancher-images-digests-windows-ltsc2022.txt` files are renamed to `rancher-images-digests-windows.txt`.
Upgrade Requirements
- Creating backups: Create a backup before you upgrade Rancher. To roll back Rancher after an upgrade, you must first back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to the same state as when the backup was created, any changes made after the upgrade will not be included after the restore.
- Helm version requirements:
  - To manage Rancher 2.12.x and later, you must upgrade your Helm client to version 3.18 or newer.
    - This change is required to reflect the addition of Kubernetes 1.33 support in this release.
    - Currently, the official Helm Version Support Policy dictates that only Helm 3.18 supports the proper Kubernetes version range for Rancher 2.12.
- CNI requirements:
  - For Kubernetes v1.19 and later, disable firewalld as it's incompatible with various CNI plugins. See #28840.
  - When upgrading or installing a Linux distribution that uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or later, upgrade to RKE v1.19.2 or later to get Flannel v0.13.0, which supports nf_tables. See Flannel #1317.
- Requirements for air-gapped environments:
  - When using a proxy in front of an air-gapped Rancher instance, you must pass additional parameters to `NO_PROXY`. See the documentation and issue #2725.
  - When installing Rancher with Docker in an air-gapped environment, you must supply a custom `registries.yaml` file to the `docker run` command, as shown in the K3s documentation and sketched below. If the registry has certificates, then you'll also need to supply those. See #28969.
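As a sketch of that Docker-based air-gapped install, the `registries.yaml` file is mounted into the container at the path K3s expects. The host paths, registry hostname, and port mappings here are illustrative assumptions following the standard Rancher air-gap instructions:

```bash
# Illustrative paths and registry hostname; see the K3s private registry
# documentation for the registries.yaml format.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /opt/rancher/registries.yaml:/etc/rancher/k3s/registries.yaml \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY=registry.example.com \
  -e CATTLE_SYSTEM_CATALOG=bundled \
  --privileged \
  rancher/rancher:v2.14.1
```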
Versions
Images
- rancher/rancher:v2.14.1
Tools
- CLI - v2.14.1
Kubernetes Versions for RKE2/K3s
- v1.35.4 (Default)
- v1.34.7
- v1.33.11
Rancher Helm Chart Versions
In Rancher v2.6.0 and later, in the Apps & Marketplace UI, many Rancher Helm charts are named with a major version that starts with 100. This prevents simultaneous upstream and Rancher changes from causing conflicting version increments. It also complies with semantic versioning (SemVer), which is a requirement for Helm. You can see the upstream version number of a chart in the build metadata, for example: 100.0.0+up2.1.0. See #32294.
Previous Rancher Behavior Changes
Previous Rancher Behavior Changes - Rancher General
- Rancher v2.14.0:
  - Rancher v2.14 removes support for Kubernetes 1.32. See #53764.
  - The Rancher Ingress resource no longer deploys `ingress-nginx`-specific annotations (`nginx.ingress.kubernetes.io/proxy-connect-timeout`, `nginx.ingress.kubernetes.io/proxy-read-timeout`, and `nginx.ingress.kubernetes.io/proxy-send-timeout`) by default. These annotations are no longer required for performance, and removing them improves compatibility with other ingress controllers following the announcement of the `ingress-nginx` retirement. See #53272.
- Rancher v2.13.0:
  - Official support for Kubernetes v1.31 and older versions has been removed. You can no longer provision new RKE2 or K3s clusters using Kubernetes versions that fall outside of the supported range (v1.32 - v1.34). See #51253.
Previous Rancher Behavior Changes - Authentication
- Rancher v2.14.0:
  - Deprecation of `tokens.management.cattle.io`:
    - Rancher v2.13 introduced a new type of token resource in the `ext.cattle.io` API group to serve as Rancher's public API for tokens. The previous token resources in the `management.cattle.io` API group (now referred to as legacy tokens, norman tokens, or v3 tokens, depending on the context) are being phased out.
    - Previously, the new `tokens.ext.cattle.io` resources were only accessible via `kubectl`, as in the example after this list. Starting in Rancher v2.14, the Rancher UI provides basic support for these tokens, allowing you to create, view, list, and delete them.
    - In future releases, all uses of `tokens.management.cattle.io` will incrementally transition to `tokens.ext.cattle.io`, eventually leading to the complete removal of support for `tokens.management.cattle.io`.
    - See #2220.
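For reference, a minimal sketch of managing the new-style tokens from the CLI; the resource name comes from the notes above, while the token name is illustrative:

```bash
# List the new-style tokens, then delete one by name (name is illustrative).
kubectl get tokens.ext.cattle.io
kubectl delete tokens.ext.cattle.io my-token
```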
Previous Rancher Behavior Changes - Backup/Restore
- Rancher v2.14.0:
  - When performing a rollback from Rancher v2.14 to Rancher v2.13 using the Backup/Restore Operator, the restore process requires all Rancher-related resources to be cleaned up on the upstream (local) cluster. For more information on the required rollback steps, refer to the Rancher documentation. See also the v2.14.0 release note on Cluster API v1beta2, introduced by the cluster-api v1.12.2 upgrade, and #916.
Previous Rancher Behavior Changes - Cluster Provisioning
- Rancher v2.14.0:
  - Embedded Cluster API removed: The built-in Rancher Provisioning Cluster API functionality (`rancher-provisioning-capi`) has been removed in Rancher v2.14.0, as Rancher Turtles is now the default mechanism for deploying CAPI CRDs and controllers. This includes:
    - The `embedded-cluster-api` feature flag.
    - The `rancher-provisioning-capi` Helm chart.
    - Related webhooks and controllers.
  - Migration Path: The migration to Rancher Turtles happens automatically during the upgrade to v2.14.0. Rancher Turtles is now the only supported method for Cluster API (CAPI) integration with Rancher. Unless you are using a certified CAPI provider installed through Turtles, no manual action is required.
    However, if you previously disabled Rancher Turtles, you will need to manually re-enable it. A warning during Rancher startup will let you know if Turtles is disabled, in which case cluster provisioning will not operate as expected.
    For more information on installing and using Rancher Turtles, refer to the Cluster API integration documentation. See #53291.
Previous Rancher Behavior Changes - Continuous Delivery (Fleet)
- Rancher v2.14.0:
  - Fleet v0.15.0 has been migrated from Helm v3 to Helm v4. For more details, see #4351.
  - The Fleet `imagescan` feature, which had been provided as experimental, is now disabled by default and will be deprecated in a future release. With this feature disabled, Git repository paths referenced by `GitRepo` resources will fail to deploy if they contain `imageScans` in their `fleet.yaml` or equivalent configuration files. If you still require this feature, you can explicitly enable it by setting `imagescan.enabled=true` when installing Fleet, as sketched below. For more information, refer to the Image Scan documentation and the guide on enabling imagescan. See #4671.
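A hedged sketch of re-enabling the feature for a standalone Fleet install; the chart repository and namespace shown are the usual Fleet defaults but are assumptions here, while `imagescan.enabled=true` comes from the note above:

```bash
# Assumed chart repo and namespace; imagescan.enabled comes from the note.
helm repo add fleet https://rancher.github.io/fleet-helm-charts/
helm upgrade --install fleet fleet/fleet \
  --namespace cattle-fleet-system \
  --set imagescan.enabled=true
```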
Previous Rancher Behavior Changes - RKE2 Provisioning
- Rancher v2.14.0:
  - Deprecation: The `v1alpha1` API used in Cluster API Provider RKE2 (CAPRKE2) is deprecated. See #797.
Previous Rancher Behavior Changes - Rancher CLI
- Rancher v2.14.0:
  - The Rancher CLI `token` command now supports the authorization code flow with Microsoft Entra ID (formerly Azure AD) as an alternative to the device code flow, which addresses environments where the device code flow is blocked by security policies. Use `--auth-flow authcode` or set `CATTLE_OAUTH_AUTH_FLOW=authcode`, as shown below. See #52404.
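For example, either form below requests the authorization code flow; both the flag and the environment variable come from the note above:

```bash
# Request the authorization code flow via flag or environment variable.
rancher token --auth-flow authcode
CATTLE_OAUTH_AUTH_FLOW=authcode rancher token
```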
Previous Rancher Behavior Changes - Rancher App (Global UI)
- Rancher v2.13.0:
  - The Rancher UI for provisioning hosted Kubernetes clusters (AKS, EKS, and GKE) has been updated to align with the new Cluster Provisioning v2 (kev2) framework. This change replaces the reliance on the older kontainerdriver (kev1) resources to determine which hosted providers are available for display. The UI now uses a new setting to manage the visibility of these providers, ensuring consistency and future compatibility. See #15391.
  - Rancher's session inactivity logic has been moved from the UI to the backend. A new session TTL setting, `auth-user-session-idle-ttl-minutes`, was introduced; it sets the maximum time a user may be idle within a browser session before the session expires. To enable the idle timeout feature, you must supply `auth-user-session-idle-ttl-minutes` and set it to a value lower than the existing absolute session limit, `auth-user-session-ttl-minutes`, as in the example below. This new backend-driven mechanism, along with its associated TTL setting, replaces the previous session timeout configuration in the UI under Global Settings > Performance. See #12552.
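A hedged example of enabling the idle timeout by setting the idle TTL below the absolute TTL. Managing these settings with `kubectl` against `settings.management.cattle.io` is an assumption (they can also be edited as Rancher settings in the UI); the setting names come from the note above:

```bash
# Idle sessions expire after 60 minutes, below the absolute session limit.
# The kubectl-based approach and the value "60" are assumptions.
kubectl patch settings.management.cattle.io auth-user-session-idle-ttl-minutes \
  --type merge -p '{"value":"60"}'
```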
Future Rancher Behavior Changes
Retention Policy for Rancher App Charts
To improve repository performance, Rancher is introducing a lifecycle management policy for charts available in the Apps feature of Rancher, specifically in the "Rancher" repository.
- The Policy: Rancher will transition from a cumulative model (retaining all historical versions forever) to a retention model that preserves chart versions for the seven (7) most recent Rancher minor releases (approximately a 2.5-year window).
- Timeline:
  - Rancher v2.13 & v2.14: Legacy chart versions (older than the 7-version window) remain available.
  - Rancher v2.15: This will be the first version to enforce the policy. Versions falling outside the 7-version window and older than two years will no longer be available.
- Impact: This change is non-destructive for existing Rancher installations. Historical versions will remain accessible but will not be available in newer release branches once they age out of the 7-version window. You are advised to upgrade your applications before upgrading to Rancher v2.15. Uninstallation after v2.15, and replacement with an updated version, will still be possible.
Long-standing Known Issues
Long-standing Known Issues - Cluster Provisioning
- Not all cluster tools can be installed on a hardened cluster.
- Rancher v2.8.1:
  - When you attempt to register a new etcd/controlplane node in a CAPR-managed cluster after a failed etcd snapshot restoration, the node can become stuck in a perpetual paused state, displaying the error message `[ERROR] 000 received while downloading Rancher connection information. Sleeping for 5 seconds and trying again`. As a workaround, you can unpause the cluster by running `kubectl edit clusters.cluster clustername -n fleet-default` and setting `spec.unpaused` to `false`. See #43735.
Long-standing Known Issues - RKE2 Provisioning
- Rancher v2.7.7:
  - Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #34518 and #42834.
- Rancher v2.7.6:
Long-standing Known Issues - K3s Provisioning
- Rancher v2.7.6:
- Rancher v2.7.2:
  - Clusters remain in an `Updating` state even when they contain nodes in an `Error` state. See #39164.
Long-standing Known Issues - Rancher App (Global UI)
- Rancher v2.10.0:
  - After deleting a Namespace or Project in the Rancher UI, the Namespace or Project remains visible. As a workaround, refresh the page. See #12220.
- Rancher v2.9.2:
  - Although system mode node pools must have at least one node, the Rancher UI allows a minimum node count of zero. Entering a minimum node count of zero through the UI can cause cluster creation to fail due to an invalid parameter error. To prevent this error, enter a minimum node count at least equal to the node count. See #11922.
- Rancher v2.7.7:
  - When creating a cluster in the Rancher UI, an underscore `_` cannot be used in the Cluster Name field. See #9416.
Long-standing Known Issues - EKS
- Rancher v2.7.0:
  - EKS clusters on Kubernetes v1.21 or below on Rancher v2.7 cannot be upgraded. See #39392.
Long-standing Known Issues - Authentication
- Rancher v2.9.0:
  - There are some known issues with the OpenID Connect provider support:
    - When the generic OIDC auth provider is enabled and you attempt to add auth provider users to a cluster or project, users are not populated in the dropdown search bar. This is expected behavior, as the OIDC auth provider alone is not searchable. See #46104.
    - When the generic OIDC auth provider is enabled, auth provider users that are added to a cluster or project by their username are not able to access resources upon logging in. A user will only have access to resources upon login if they are added by their userID. See #46105.
    - When the generic OIDC auth provider is enabled and an auth provider user in a nested group is logged into Rancher, the user sees the following error when attempting to create a Project: `projectroletemplatebindings.management.cattle.io is forbidden: User "u-gcxatwsnku" cannot create resource "projectroletemplatebindings" in API group "management.cattle.io" in the namespace "p-9t5pg"`. However, the project is still created. See #46106.
Long-standing Known Issues - Rancher Webhook
- Rancher v2.7.2:
  - A webhook is installed in all downstream clusters. There are several issues that users may encounter with this functionality:
    - If you roll back from Rancher v2.7.2 or later to a Rancher version earlier than v2.7.2, the webhooks will remain in downstream clusters. Since the webhook is designed to be 1:1 compatible with specific versions of Rancher, this can cause unexpected behavior downstream. The Rancher team has developed a script which should be used after the rollback is complete (meaning after a Rancher version earlier than v2.7.2 is running); it removes the webhook from affected downstream clusters. See #40816.
Long-standing Known Issues - Virtualization Management (Harvester)
- Rancher v2.13.1:
  - When upgrading to Rancher v2.13.1 or later while using Harvester v1.6.1, users may encounter an issue with Load Balancers in downstream clusters that use the Harvester Cloud Provider, and must perform a workaround to instruct Calico not to use any of the IPs/interfaces managed by `kube-vip`. See #9767.
- Rancher v2.7.2:
  - If you're using Rancher v2.7.2 with Harvester v1.1.1 clusters, you won't be able to select the Harvester cloud provider when deploying or updating guest clusters. The Harvester release notes contain instructions on how to resolve this. See #3750.
Long-standing Known Issues - Backup/Restore
- When migrating to a cluster with the Rancher Backup feature, the `server-url` cannot be changed to a different location; it must continue to use the same URL.
- Rancher v2.13.0:
  - When performing a rollback from Rancher v2.13.0 to v2.12.3 using the backup and restore operator (BRO), the restore does not complete successfully. See #844. To work around this issue, you must scale down your Rancher deployment and uninstall the Webhook chart before performing the restore. For details, refer to this Knowledge Base article.
- Rancher v2.7.7:
  - Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #34518 and #42834.