github cloudposse/terraform-aws-eks-cluster 0.42.0
v0.42.0 Fix EKS upgrades, enable choice of service role


This release restores compatibility with v0.39.0 and adds new features:

  • Incorporates and builds on PR #116 by providing the create_eks_service_role option, which lets you disable the automatic creation and assignment of a service role to the EKS cluster and supply a role of your own instead (see the example after this list). (Breaking change from v0.41.0: create_eks_service_role defaults to true and must be explicitly set to false; it is not enough just to set eks_cluster_service_role_arn to a non-empty string.)
  • In an effort to cope with persistent issues surrounding the management of the aws-auth ConfigMap, this release:
    • Provides a new default Kubernetes Terraform provider configuration that works in more circumstances. In particular, it works when upgrading the EKS cluster to a new minor version of Kubernetes, which v0.39.0 could not do. Note: this configuration includes a hack, dummy_kubeapi_server, which works around current issues with the Kubernetes provider and the way Terraform handles provider initialization. We are not committed to supporting this hack, and if it starts to cause problems you can always disable it by setting dummy_kubeapi_server to null.
    • Provides additional guidance in the README on how the module works and configuration issues people have run into.
    • Provides 3 options for how to configure the Kubernetes Terraform provider (a configuration sketch follows this list):
      1. Using an auth token retrieved by an aws_eks_cluster_auth data source (kube_data_auth_enabled). This is the mechanism used in v0.39.0 and remains the default, but it is now only 1 of 3 options. Note: This is the only configuration we are committed to supporting long-term. The other 2 options below are hacks to work around current issues with the Kubernetes provider (v2.3.2) and the way Terraform handles provider initialization, and may be deprecated at any time. Even this option is planned to be deprecated, with the Kubernetes provider removed from the module entirely, once the AWS provider gains the ability to modify the aws-auth ConfigMap. (See the request to add an API to modify aws-auth.)
      2. Using an auth token retrieved via the Kubernetes exec feature, which runs the AWS CLI command aws eks get-token, with the further option of passing --profile or --role-arn to that command (kube_exec_auth_enabled, kube_exec_auth_aws_profile_enabled, kube_exec_auth_aws_profile, kube_exec_auth_role_arn_enabled, kube_exec_auth_role_arn). This option seems to work better than the data source method (it avoids issues with stale or cached tokens), provided you have the aws CLI available and configured properly. (The data source method remains the default because it requires no external utilities or additional configuration.) Note: As stated above, this option is a workaround for external issues, and we are not committed to supporting it long-term. It also has external dependencies (see #123, #124) and was broken by changes in AWS CLI versions 1.20.9 and 2.2.24 (fixed in subsequent CLI releases). We are not going to address issues where this option fails because of external dependencies or configuration problems. If it does not work for you, please use kube_data_auth_enabled.
      3. Using a kubeconfig file to configure access to the cluster. This option seems to work best, but of course you cannot provide a kubeconfig file to access a cluster before you create the cluster. Also, if you generate a kubeconfig file, you must ensure that it remains available, which can be an issue with automated systems that start each task with a "clean installation". Note: We know this option does not work in some circumstances and we are not going to do anything about it. This option is available for people to use when it works for them, and in particular to enable users to import resources while hashicorp/terraform#27934 remains open, but we have no plans to support or enhance it and in general will not consider failures of this option to be bugs. Use it if it works for you, and if it does not, then please use the supported kube_data_auth_enabled option instead.
  • Adds aws_auth_yaml_strip_quotes to toggle whether the aws-auth YAML is generated with or without quotes (see the sketch after this list). Terraform will show a diff during plan if this module generates YAML with quotes but the data source returns YAML without quotes, or vice versa; whether the data source returns quoted YAML appears to depend on which Kubernetes version the EKS cluster is running.
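
For illustration, here is a minimal sketch of bringing your own service role with this release. The registry source, the IAM role name, and the omitted inputs are assumptions made for the example; only the two inputs named above come from these notes.

    # Sketch: disable the module's automatic service role and supply your own.
    # "eks_service" is a hypothetical IAM role defined elsewhere in the root module;
    # other required module inputs (VPC, subnets, etc.) are omitted for brevity.
    module "eks_cluster" {
      source  = "cloudposse/eks-cluster/aws"
      version = "0.42.0"

      create_eks_service_role      = false                         # must be set explicitly
      eks_cluster_service_role_arn = aws_iam_role.eks_service.arn  # setting the ARN alone is not enough

      # ... remaining cluster configuration ...
    }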
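As a rough sketch, the three provider auth configurations map to module inputs along these lines (set inside the module block). The names used for options 1 and 2, and dummy_kubeapi_server, are taken from this release; the kubeconfig-related names in option 3 are assumptions and should be checked against the module's variables.tf.

    # Option 1 (default, supported): token from the aws_eks_cluster_auth data source.
    kube_data_auth_enabled = true

    # Option 2 (workaround): exec auth via `aws eks get-token`, optionally with an
    # AWS profile or an assumed role. Profile and role ARN values are placeholders.
    # kube_data_auth_enabled             = false
    # kube_exec_auth_enabled             = true
    # kube_exec_auth_aws_profile_enabled = true
    # kube_exec_auth_aws_profile         = "my-profile"
    # kube_exec_auth_role_arn_enabled    = true
    # kube_exec_auth_role_arn            = "arn:aws:iam::123456789012:role/terraform"

    # Option 3 (unsupported workaround): kubeconfig file. These variable names are
    # assumptions, not confirmed by these notes.
    # kubeconfig_path_enabled = true
    # kubeconfig_path         = "/path/to/kubeconfig"

    # If the dummy_kubeapi_server hack causes problems, it can be disabled:
    # dummy_kubeapi_server = null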
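If plan keeps showing quoting-only diffs on the aws-auth ConfigMap, the new toggle can be flipped to match what the cluster returns; a sketch (pick whichever value matches your cluster):

    # Generate the aws-auth YAML without quotes, to match what this cluster's
    # Kubernetes version returns through the data source.
    aws_auth_yaml_strip_quotes = true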

Enhance Kubernetes provider configuration @Nuru (#119)

what

  • Make Kubernetes provider configuration more robust, and provide alternative options.
  • Enhance and correct README
  • Make explicit the dependency of EKS cluster on Security Group rules
  • Revert PR #114
  • Add create_eks_service_role option
  • Add aws_auth_yaml_strip_quotes to toggle whether aws-auth YAML is generated with or without quotes
  • Update Cloud Posse standard GitHub configuration to current

why

  • Closes #58, closes #63, closes #104, closes #118
  • Closes #106
  • Closes #112
  • Undo breaking changes made prematurely
  • Enhance the PR #116 feature so that it does not run into problems with derived values, for example when the service role being passed in is created in the root module at the same time as the cluster itself
  • Terraform will show a diff during plan if this module generates YAML with quotes but the data source returns YAML without quotes, or vice versa. Whether the data source returns YAML with or without quotes seems to depend on what Kubernetes version the EKS cluster is running.
  • Routine maintenance
