openebs/openebs v2.5.0


Release Summary

A warm and happy new year to all our users, contributors, and supporters. 🎉 🎉 🎉

Keeping up with our tradition of monthly releases, OpenEBS v2.5 is now GA with some key enhancements and several fixes for the issues reported by the user community. Here are some of the key highlights in this release:

New capabilities

  • OpenEBS has support for multiple storage engines, and feedback from users has shown that they tend to use only a few of these engines on any given cluster, depending on the workload requirements. As a way to provide more flexibility, we are introducing separate helm charts per engine. With OpenEBS 2.5, the following helm charts are supported (a sample install is shown after this list).

    • openebs - The most widely deployed chart, with support for Jiva, cStor, and Local PV (hostpath and device) volumes.
    • zfs-localpv - Helm chart for ZFS Local PV CSI driver.
    • cstor-operators - Helm chart for cStor CSPC Pools and CSI driver.
    • dynamic-localpv-provisioner - Helm chart for only installing Local PV hostpath and device provisioners.

    (Special shout out to @sonasingh46, @shubham14bajpai, @prateekpandey14, @xUnholy, @akhilerm for continued efforts in helping to build the above helm charts.)

  • OpenEBS is introducing a new CSI driver for dynamic provisioning of Kubernetes Local Volumes backed by LVM. This driver is released as alpha and currently supports the following features:

    • Create and Delete Persistent Volumes
    • Resize Persistent Volume

    For instructions on how to set up and use the LVM CSI driver, please see https://github.com/openebs/lvm-localpv. A sample StorageClass is sketched after this list.
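
For example, a cluster that only needs the ZFS Local PV engine can install just that chart. The commands below are a minimal sketch; the repository URL and release name are assumptions based on the per-engine chart layout, so check the chart's README for the authoritative values.

# Sketch: install only the ZFS Local PV chart (repo URL and names are assumptions)
helm repo add openebs-zfslocalpv https://openebs.github.io/zfs-localpv
helm repo update
helm install zfs-localpv openebs-zfslocalpv/zfs-localpv --namespace openebs --create-namespace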
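
To give a feel for how the alpha LVM driver is consumed, here is a rough sketch of a StorageClass that points at an existing LVM volume group on the nodes. The provisioner name (local.csi.openebs.io), the volgroup parameter, and the group name lvmvg are assumptions taken from the lvm-localpv project; verify them against the repository linked above.

# Sketch only: assumes an LVM volume group named "lvmvg" already exists on the nodes
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv
provisioner: local.csi.openebs.io
parameters:
  volgroup: "lvmvg"   # assumed parameter and volume group name; see openebs/lvm-localpv
EOF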

Key Improvements

  • Enhanced the ZFS Local PV scheduler to support spreading volumes across nodes based on the capacity of the volumes already provisioned. After upgrading to this release, capacity-based spreading is used by default; in previous releases, volumes were spread based on the number of volumes provisioned per node. openebs/zfs-localpv#266.

  • Added support to configure image pull secrets for the pods launched by the OpenEBS Local PV Provisioner and cStor (CSPC) operators. The image pull secrets (comma-separated strings) can be passed as an environment variable (OPENEBS_IO_IMAGE_PULL_SECRETS) to the deployments that launch these additional pods; see the example after this list. The following deployments need to be updated.

    • For Local PVs, the deployments openebs-localpv-provisioner and openebs-ndm-operator
    • For cStor CSI based PVs, the deployments cspc-operator and cvc-operator
      (Special thanks to @allenhaozi for helping with this fix. openebs/dynamic-localpv-provisioner#22)
  • Added support to allow users to specify custom node labels under allowedTopologies in the cStor CSI StorageClass; a sketch is shown after this list. openebs/cstor-csi#135
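
As an illustration of the image pull secret change, the environment variable can be patched onto the relevant deployments with kubectl. This is a minimal sketch; the openebs namespace and the secret name regcred are assumptions for your environment.

# Assumes the OpenEBS components run in the "openebs" namespace and that an
# image pull secret named "regcred" already exists in that namespace.
kubectl set env deployment/openebs-localpv-provisioner -n openebs OPENEBS_IO_IMAGE_PULL_SECRETS="regcred"
kubectl set env deployment/openebs-ndm-operator -n openebs OPENEBS_IO_IMAGE_PULL_SECRETS="regcred"
# For cStor CSI based PVs, patch cspc-operator and cvc-operator the same way.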
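
And here is a rough sketch of a cStor CSI StorageClass that restricts provisioning with a custom node label under allowedTopologies. The parameters block and the label key openebs.io/rack are illustrative assumptions; see openebs/cstor-csi#135 and the cStor operators documentation for the exact supported fields.

# Sketch only: label key, values, and parameters are assumptions
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cstor-csi-topology
provisioner: cstor.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/rack        # assumed custom node label
    values:
    - rack1
parameters:
  cas-type: cstor               # assumed parameter names; verify against the cStor docs
  cstorPoolCluster: cspc-disk-pool
  replicaCount: "3"
EOF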

Key Bug Fixes

  • Fixed an issue that could cause a Jiva replica to fail to initialize if the node where the replica pod is scheduled shut down abruptly during replica initialization. openebs/jiva#337.
  • Fixed an issue that caused Restore (with automatic Target IP configuration enabled) to fail if cStor volumes were created with Target Affinity to the application pod. openebs/velero-plugin#141.
  • Fixed an issue where Jiva and cStor volumes would remain in the Pending state on Kubernetes 1.20 and above. Kubernetes 1.20 deprecated the SelfLink option, which caused this failure with the older Jiva and cStor provisioners. #3314
  • Fixed an issue with cStor CSI volumes that caused Pods using them on unmanaged Kubernetes clusters to remain in the Pending state due to a multi-attach error. cStor depends on the CSI VolumeAttachment object to determine where to attach a volume; on unmanaged clusters, Kubernetes did not clear the VolumeAttachment object from the failed node, so cStor assumed the volume was still attached to the old node.

Backward Incompatibilities

  • Kubernetes 1.17 or higher is recommended, as this release contains the following updates that are not compatible with older Kubernetes releases.

    • The CRD version has been upgraded to v1. (Thanks to the efforts from @RealHarshThakur, @prateekpandey14, @akhilerm)
    • The CSI components have been upgraded to:
      • quay.io/k8scsi/csi-node-driver-registrar:v2.1.0
      • quay.io/k8scsi/csi-provisioner:v2.1.0
      • quay.io/k8scsi/snapshot-controller:v4.0.0
      • quay.io/k8scsi/csi-snapshotter:v4.0.0
      • quay.io/k8scsi/csi-resizer:v1.1.0
      • quay.io/k8scsi/csi-attacher:v3.1.0
      • k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3 (for cStor CSI volumes)
      • k8s.gcr.io/sig-storage/snapshot-controller:v3.0.3 (for cStor CSI volumes)
  • If you are upgrading from an older version of the cStor Operators to this version, you will need to manually delete the cStor CSIDriver object prior to the upgrade: kubectl delete csidriver cstor.csi.openebs.io. For complete details on how to upgrade your cStor Operators, see https://github.com/openebs/upgrade/blob/master/docs/upgrade.md#cspc-pools.

Other notable updates

  • OpenEBS has applied to become a CNCF incubation project and is currently undergoing a Storage SIG review of the project and addressing the review comments provided. One of the significant efforts we are taking in this direction is to upstream the changes done in uZFS to OpenZFS.
  • Automation of further Day 2 operations, such as automatically detecting node deletion from the cluster and re-balancing volume replicas onto the next available node.
  • Migrating the CI pipelines from Travis to GitHub actions.
  • Several enhancements to the cStor Operators documentation with a lot of help from @survivant.

Show your Support

Thank you @laimison (Renthopper) for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shout outs!

MANY THANKS to our existing contributors and to everyone helping to keep the OpenEBS community going.

A very special thanks to our first-time contributors to code, tests, and docs: @allenhaozi, @anandprabhakar0507, @Hoverbear, @kaushikp13, @praveengt

Getting Started

Prerequisite to install

  • Kubernetes 1.17 or newer is installed.
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure the iSCSI initiator is installed on the Kubernetes nodes (an example for Ubuntu is shown after this list).
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some disks from being discovered, update the filters in the NDM ConfigMap to exclude their paths before installing OpenEBS (see the commands after this list).
  • NDM runs as a privileged pod since it needs to access device information. Please make the necessary changes to grant it access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.
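
For example, on an Ubuntu node the iSCSI initiator can be installed and verified as below; package names differ on other distributions (on RHEL/CentOS the package is iscsi-initiator-utils).

# Ubuntu/Debian example; adjust the package manager and package name for your distribution
sudo apt-get install -y open-iscsi
sudo systemctl enable --now iscsid
systemctl status iscsid   # should report "active (running)"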
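
To review or adjust the NDM device filters on an existing install, the ConfigMap can be inspected and edited as below. The name openebs-ndm-config and the openebs namespace are the defaults from the operator YAML; if you deployed differently, adjust accordingly. For a fresh install, edit the same ConfigMap section inside the operator YAML before applying it.

# Inspect the current NDM filter configuration (names assume the default operator install)
kubectl get configmap openebs-ndm-config -n openebs -o yaml
# Edit the path-filter exclude list to add device paths that NDM should ignore
kubectl edit configmap openebs-ndm-config -n openebs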

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.5.0/openebs-operator.yaml
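
Once the operator YAML is applied, verify that the control plane pods come up and that the default storage classes are created:

kubectl get pods -n openebs
kubectl get storageclass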

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.5.0
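
Note that the --name flag above is Helm 2 syntax. If you are using Helm 3, which dropped --name, the equivalent install (creating the openebs namespace if it does not exist yet) is:

helm install openebs openebs/openebs --namespace openebs --version 2.5.0 --create-namespace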

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.5 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.5, either one at a time or several at once.
  • Upgrade cStor pools to 2.5 and their associated volumes, either one at a time or several at once.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues with setup or upgrade, you can contact:

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor pools is using the new custom resource called cStorPoolCluster (CSPC). Even though provisioning of cStor pools using StoragePoolClaim (SPC) is still supported, it will be deprecated in future releases. Pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor pools, make sure that raw block devices are available on the nodes. If a block device is formatted with a filesystem or mounted, a cStor pool will not be created on it. In the current release, there are manual steps that can be followed to clear the filesystem or to use partitions for creating cStor pools; please reach out to the community (#openebs) at https://slack.k8s.io. A sketch of how to inspect and clear a device is shown after this list.
  • If you are using cStor pools with ephemeral devices, starting with 1.2, the cStor pool will not be automatically re-created on the new devices after a node restart. This check is in place to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, and involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in #2855.
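
As a rough sketch of the manual cleanup mentioned above, you can check whether a device still carries a filesystem signature and, only if you are certain the data is no longer needed, wipe it so that it is seen as a raw block device. The device name /dev/sdb is an example placeholder; double-check it before running anything, because wipefs is destructive.

# Check for existing filesystem signatures and partitions on the device (example: /dev/sdb)
lsblk -f /dev/sdb
# DESTRUCTIVE: wipe all filesystem signatures so the device can be used as a raw block device
sudo wipefs -a /dev/sdb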
