github cloudposse/terraform-aws-eks-node-group 0.11.0
v0.11.0


🚀 Enhancements

Optional Create before destroy. Add Launch Template and related features. @Nuru (#31)

what

Implement "create before destroy" for zero downtime node group updates. This is optional and off by default, because on first use it will cause any existing node groups created with this module to be destroyed and then replaced, causing the same kind of outage this feature will prevent after it is activated.

  • Because node groups must have unique names within a cluster, creating a new node group before destroying the old one requires node groups to have random names. This is implemented by adding a 1-word random pet name to the end of the static node group name. Turning this on (or turning it off after it has been on) will cause previously created node groups to be replaced because of the change in name.
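A minimal sketch of opting in (inputs other than create_before_destroy are illustrative; see the module's README for the full set of required inputs):

```hcl
module "eks_node_group" {
  source  = "cloudposse/eks-node-group/aws"
  version = "0.11.0"

  cluster_name   = var.eks_cluster_name
  subnet_ids     = var.subnet_ids
  instance_types = ["t3.medium"]
  desired_size   = 2
  min_size       = 1
  max_size       = 3

  # Off by default. Enabling (or later disabling) this changes the
  # generated node group name and forces a one-time replacement of
  # node groups created under the old naming scheme.
  create_before_destroy = true
}
```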

Add features previously missing here but present in terraform-aws-eks-workers, to the extent supported by AWS, such as:

  • Set nodes to launch with Kubernetes taints
  • Specify launch template (not all features supported by AWS, see "Launch template - Prohibited" in AWS documentation)
  • Specify AMI for nodes
  • Arbitrary bootstrap.sh options
  • Arbitrary kubelet options
  • "Before join" and "after join" scripting

why

  • Many kinds of node group changes require Terraform to replace the existing node group with a new one. The default Terraform behavior is to delete the old resource before creating the new one, since many resources (such as node group) require unique names, so you cannot create the new resource while the old one exists. However, this results in the node group being completely destroyed, and therefore offline for several minutes, which is usually an unacceptable outage. Now you can avoid this by setting create_before_destroy = true.
  • Adds useful features that were previously unavailable, bringing this module closer to feature parity with terraform-aws-eks-workers.

caveats

When using create before destroy

We cannot automatically detect when the node_group will be destroyed and generate a new name for it. Instead, we have tried to cause a new name to be generated whenever anything changes that would cause the node group to be destroyed. This may not be perfect. If the name changes unnecessarily, it will trigger a node group replacement, which should be tolerable. If the name fails to change when it needs to, the Terraform apply will fail with an error about the resource already existing. Please let us know what change we missed so we can update the module. Meanwhile, you can get around this by manually "tainting" the random_pet as explained below.

For a short period of time you will be running 2 node groups.

  • There may still be service outages related to pods and EBS volumes transferring from the old node group to the new one, though this should generally behave like the cluster rapidly scaling up and rapidly scaling back down. If you have issues with autoscaling, such as running single replicas with minAvailable: 25% (which rounds up to minAvailable: 1), preventing the pod from being drained from a node, you may have issues with node groups being replaced.
  • Your AWS service quotas need to be large enough to run 2 sets of node groups at the same time. If you do not have enough quota for that, launching the new node group will fail. If that happens, you will need to manually taint the random_pet resource as well: when Terraform retries the failed node group, it will reuse the same name (and fail again) unless you also taint random_pet. Assuming you invoked the module as module "eks_node_group", you would taint random_pet with
terraform taint 'module.eks_node_group.random_pet.cbd[0]'

Using new features

Many of the new features of this module rely on new AWS features, and it is unclear to what extent they actually work.

  • It appears that it is still not possible to tag the Auto Scaling Group or Launch Template with extra tags for the Kubernetes Cluster Autoscaler.
  • It appears that it is still not possible to propagate tags to elastic GPUs or spot instance requests.
  • There may be other issues similarly beyond our control.
  • There are many new features in this module and it has not been comprehensively tested, so be cautious and test your use cases on non-critical clusters before moving this into production.

Most of the new features require this module to create a Launch Template, and of course you can now supply your own launch template (referenced by name). There is some overlap between settings that can be made directly on an EKS managed node group and some that can be made in a launch template. This results in settings being allowed in one place and not in the other: these limitations and prohibitions are detailed in the AWS documentation. This module attempts to resolve these differences in many cases, but some limitations remain:

  • Remote access using SSH is not supported when using a launch template created by this module. Correctly configuring the launch template for remote access is tricky because it interferes with automatic configuration of access by the Kubernetes control plane. We do not need it and cannot test it at this time, so we do not support it. If you need it, you can create your own launch template with the desired configuration and leave the ec2_ssh_key setting null.
  • If you supply the Launch Template, this module requires that the Launch Template specify the AMI Image ID to use. This requirement could be relaxed in the future if we find demand for it.
  • In general, this module assumes you are using an Amazon Linux 2 AMI, and supports selecting the AMI by Kubernetes version or AMI release version. If you are using some other AMI that does not support Amazon's bootstrap.sh, most of the new features will not work. You will need to implement them yourself on your AMI. You can provide arbitrary (Base64 encoded) User Data to your AMI via userdata_override.
  • No support for spot instances specified by launch template (EKS limitation).
  • No support for shutdown behavior or "Stop - Hibernate" behavior in launch template (EKS limitation).
  • No support for IAM instance profile or subnets via Launch Template (EKS limitation). You can still supply subnets via subnet_ids and the module will apply them via the node group configuration.
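To illustrate the userdata_override escape hatch mentioned above for AMIs that do not support Amazon's bootstrap.sh (the AMI ID and file name are placeholders):

```hcl
module "eks_node_group" {
  source  = "cloudposse/eks-node-group/aws"
  version = "0.11.0"

  # ... cluster_name, subnet_ids, sizing inputs as usual ...

  # A custom AMI that does not ship Amazon's bootstrap.sh
  ami_image_id = "ami-0123456789abcdef0"

  # Supply complete, Base64-encoded User Data yourself; the module's
  # bootstrap.sh-based features (taints, kubelet options, before/after
  # join scripts) are bypassed and must be implemented in your AMI.
  userdata_override = base64encode(file("${path.module}/custom-userdata.sh"))
}
```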

references

Many of the new features are made possible by EKS adding support for launch templates.
