Features
- Kubernetes v1.5.1 (#166)
- Configurable instance tenancy (#146, thanks to @iameli)
- Source of truth for updating cluster-autoscaler (#151)
- DNS horizontal autoscaling (#178)
- Add `customSettings` to `cluster.yaml` (#209, thanks to @redbaron); see the example below
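To make the new setting concrete, here is a minimal, hypothetical `cluster.yaml` excerpt using `customSettings`. This assumes, per #209, that `customSettings` is a free-form map whose contents are made available to customized stack templates; every key name under it below is invented for illustration:

```yaml
# Hypothetical cluster.yaml excerpt -- only the customSettings mechanism
# itself comes from this release; the key names below are illustrative.
clusterName: mycluster
externalDNSName: kube.example.com
region: us-west-2

# Free-form, user-defined settings (#209), assumed to be exposed to
# customized CloudFormation stack templates for your own use.
customSettings:
  environment: production
  costCenter: "12345"
```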
Improvements
- Calico self hosted integration (#124, thanks to @heschlie)
- Controller nodes are now schedulable/tainted (#150)
- Conform node pools powered by Spot Fleet to ones powered by ASG (#167)
- The minimum number of kube-dns replicas is now 2 instead of 1 for availability (#178); see the autoscaler sketch after this list
- Update kube-dns to the one bundled with k8s v1.5.1 (#177)
- Move from the deprecated m3.medium instance type to the marginally better t2.medium (#184)
- Use computed stack name to ensure node pools are nested within the cluster (#187, thanks to @icereval)
- Allow etcd cluster health check via port 2379 (#191, thanks to @jgmize)
- Set up node labels following the same convention used by kops (#221, thanks to @gianrubio)
- Update the kubernetes-dashboard to 1.5.1 (#228, thanks to @ankon)
- Various Bash improvements (#217, thanks to @redbaron)
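As an illustration of the kube-dns changes above (DNS horizontal autoscaling with a minimum of 2 replicas), here is a sketch of the kind of cluster-proportional-autoscaler ConfigMap the DNS autoscaler consumes, with the floor pinned at 2. The object name and the other parameter values are assumptions, not taken verbatim from the manifests kube-aws renders:

```yaml
# Sketch only: the DNS horizontal autoscaler reads its scaling policy from a
# ConfigMap like this; "min": 2 keeps at least two kube-dns replicas alive.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns-autoscaler   # assumed name
  namespace: kube-system
data:
  # linear mode: replicas = max(ceil(cores / coresPerReplica),
  #                             ceil(nodes / nodesPerReplica)),
  # but never below "min"
  linear: '{"coresPerReplica":256,"nodesPerReplica":16,"min":2}'
```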
Fixes
- Fix kube-node-label failure when there is whitespace in a security group name (#163, thanks to @tarvip)
- Correct node pool command inconsistencies/Remove deprecated node pools render command (#174, thanks to @c-knowles)
- Add missing validations for a node pool powered by Spot Fleet (#179)
- Controller node not being properly tainted (#199, thanks to @artushin)
- Handle error from ReadOrCreateEncryptedTLSAssets gracefully (#188, thanks to @andrejvanderzee)
- Fix typo "etcdDataVolumeEphemeral" (#194, thanks to @jgmize)
- Report error if assets packing failed (#204, thanks to @redbaron)
- Typo in node-pools render stack message (#225, thanks to @whereisaaron)
- Don't block ICMP for API ELB (#220, thanks to @whereisaaron)
- Change taint-and-uncordon worker task to use docker for now (#231, thanks to @whereisaaron)
- Fix kubectl logs problem due to apiserver config (#223, thanks to @mgilbir)
Documentation
- Address the issue #121 by documenting that:
  - `workerCount` should explicitly be set to zero if you'd like to have no workers in the main cluster (see the cluster.yaml sketch after this section)
  - `kube-aws update` doesn't work when decreasing the number of workers down to zero as of today
  - `kube-aws node-pools update` on a spot-fleet-based node pool would possibly result in some downtime
- Point to the node-pool documentation instead of the dead (aws-experimental-features) link (#200, thanks to @ankon)
- Fix the SAN for non-us-east-1 AWS regions (#201, thanks to @ankon)
- Remove the "add cluster logging" link and fix the broken iscsi, host-dns, and rdb links (#207, thanks to @reiinakano)
- Document that different accounts may have different supported AZs, and possible solutions (#210, thanks to @helinwang)
- Update e2e/README.md: Add description for each environment variable used for customizing the test cluster (3f47616)
- Update kubernetes-on-aws-limitations.md (dec2402)
- Update ROADMAP.md (c39793c)
- Fix word to match other headings (#235, thanks to @ankon)
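To illustrate the `workerCount` caveat documented above, here is a minimal `cluster.yaml` sketch for a main cluster that delegates all workers to node pools. Apart from `workerCount` itself, the keys shown are illustrative:

```yaml
# Main cluster with no workers of its own; workers come from node pools.
clusterName: mycluster            # illustrative
externalDNSName: kube.example.com # illustrative

# Must be set to zero explicitly if the main cluster should have no workers.
# Caveat from these notes: `kube-aws update` cannot shrink an existing
# cluster's worker count down to zero as of this release.
workerCount: 0
```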
Full change log
v0.9.2...v0.9.3