v2.8.0

Release Summary

OpenEBS v2.8 is another maintenance release on the way towards 3.0, and includes fixes and enhancements geared towards migrating non-CSI volumes to CSI, along with improvements to e2e. This release also includes some key user-requested bug fixes and enhancements.


Important Announcement: KubeCon + CloudNativeCon Europe 2021 will take place May 4 - 7, 2021! Meet the OpenEBS maintainers and end-users to learn more about the OpenEBS roadmap, implementation details, best practices, and more. RSVP to one of the scheduled community events.


Component versions

The latest release versions of each of the engines are as follows:

Key Improvements

  • Updated the Kubernetes resources used by the OpenEBS project, such as CRDs, RBAC, CSIDriver, and Admission Controller objects, to v1, as the corresponding beta or alpha versioned objects are deprecated in Kubernetes 1.22. This change requires that the OpenEBS 2.8 release be used with Kubernetes 1.18 or higher; a quick API check is sketched after this list.
  • The Jiva CSI driver has been promoted to beta. For instructions on how to set up and use the Jiva CSI driver, see https://github.com/openebs/jiva-operator. Major updates in this release include:
    • Upgrade support for Jiva volumes provisioned via the CSI Driver
    • Migration of external-provisioner provisioned Jiva volumes to the Jiva CSI Driver
    • e2e tests for Jiva CSI volumes
  • Enhanced ZFS Local PV to allow users to set up custom finalizers on ZFS volumes, giving users control to plug in custom volume lifecycle operations (openebs/zfs-localpv#302); a hedged example follows this list.
  • Enhanced ZFS Local PV volume creation with ImmediateBinding to attempt to pick a new node for the volume if the selected node couldn't provision it. (openebs/zfs-localpv#270)
  • LVM Local PV has been promoted to beta. For instructions on how to set up and use the Local PV LVM CSI driver, see https://github.com/openebs/lvm-localpv. Major updates in this release include:
    • Enhanced the capacity reporting feature by updating the lvmetad cache prior to reporting the current status.
    • e2e tests updated with resiliency tests.
  • OpenEBS Rawfile Local PV has been promoted to beta. For instructions on how to set it up and use it, see https://github.com/openebs/rawfile-localpv
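
Before upgrading, you can confirm that the cluster serves the v1 APIs that the updated objects rely on. A minimal check, assuming kubectl access to the target cluster:

# Verify that the v1 API groups used by the updated CRDs, RBAC,
# CSIDriver, and Admission Controller objects are served:
kubectl api-versions | grep -E 'apiextensions.k8s.io/v1$|storage.k8s.io/v1$|admissionregistration.k8s.io/v1$|rbac.authorization.k8s.io/v1$'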
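
As a sketch of the ZFS Local PV custom finalizer feature, a finalizer can be appended to a ZFSVolume resource with a JSON patch. The volume name and finalizer value below are hypothetical, and the ZFSVolume object is assumed to live in the openebs namespace:

# Append a hypothetical custom finalizer to an existing ZFSVolume:
kubectl patch zfsvolumes.zfs.openebs.io pvc-example-1234 -n openebs \
  --type=json \
  -p '[{"op": "add", "path": "/metadata/finalizers/-", "value": "example.com/cleanup-hook"}]'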

Key Bug Fixes

  • [Local PV - Device] Fixed an issue causing the Local PV provisioner to error out if the user had manually deleted the BlockDeviceClaim (BDC). (#3363)
  • [NDM] Fixed an issue causing a crash in NDM after a block device was resized. (#3362)
  • Several fixes to docs were also included in this release.

Backward Incompatibilities

  • A Kubernetes 1.18 or higher release is recommended, as this release contains the following updates that are not compatible with older Kubernetes releases.

    • The CSI components have been upgraded to:
      • k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
      • k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0
      • k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
      • k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1 (for Mayastor CSI volumes)
      • k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
      • k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
      • k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
      • k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3 (for cStor CSI volumes)
      • k8s.gcr.io/sig-storage/snapshot-controller:v3.0.3 (for cStor CSI volumes)
  • If you are upgrading from a version of cStor operators older than 2.6, you will need to manually delete the cStor CSI driver object prior to upgrading (see the sketch after this list): kubectl delete csidriver cstor.csi.openebs.io. For complete details on how to upgrade your cStor operators, see https://github.com/openebs/upgrade/blob/master/docs/upgrade.md#cspc-pools.

  • The CRD API version for the cStor custom resources has been updated to v1. If you are upgrading via the Helm chart, make sure that the new CRDs are applied: https://github.com/openebs/cstor-operators/tree/master/deploy/helm/charts/crds
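
A minimal pre-upgrade sketch for the cStor CSI driver step above, assuming kubectl access and cStor operators older than 2.6; the v1 CRD manifests themselves should be taken from the linked cstor-operators directory:

# Check whether the old cStor CSIDriver object exists, then delete it:
kubectl get csidriver cstor.csi.openebs.io
kubectl delete csidriver cstor.csi.openebs.io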

Other notable updates

  • OpenEBS has applied to become a CNCF incubation project and is currently undergoing a Storage SIG review of the project, addressing the review comments provided. One of the significant efforts we are taking in this direction is to upstream the changes done in uZFS to OpenZFS.
  • Working on automating further Day 2 operations, such as automatically detecting a node deletion from the cluster and re-balancing the volume replicas onto the next available node.
  • Migrating the CI pipelines from Travis to GitHub actions.
  • Several enhancements to the cStor Operators documentation with a lot of help from @survivant.
  • Verifying that PSP support is disabled by default, as PSPs are going to be deprecated in future versions of Kubernetes.
  • Sample Grafana dashboards for managing OpenEBS are being developed here: https://github.com/openebs/charts/tree/gh-pages/grafana-charts

Show your Support

Thank you @jayheinlein from Sharecare, Inc. for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md.

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shoutouts!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS community going.

We are excited to welcome Harsh Thakur as maintainer for Local PV engines.

A very special thanks to our first-time contributors to code, tests, and docs: @etherealvisage, @ntdt, @centromere, @watcher00090, @t3hmrman

Getting Started

Prerequisite to install

  • Kubernetes 1.18 or newer is installed.
  • Make sure that you run the below installation steps with the cluster-admin context. The installation involves creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM ConfigMap to exclude paths before installing OpenEBS (see the sketch after this list).
  • NDM runs as a privileged pod since it needs to access device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.
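
As a hedged sketch of excluding device paths from NDM discovery, the path filter in the NDM ConfigMap (openebs-ndm-config in the default manifests) can be extended; if edited after install, the NDM DaemonSet pods need a restart to pick up the change. The exclude values below are illustrative:

# Edit the NDM ConfigMap (or the same block in openebs-operator.yaml
# before applying it) and extend the path filter's exclude list:
kubectl edit configmap openebs-ndm-config -n openebs
#   filterconfigs:
#     - key: path-filter
#       name: path filter
#       state: true
#       include: ""
#       exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-"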

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.8.0/openebs-operator.yaml
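
After applying the operator YAML, a quick sanity check, assuming the default openebs namespace:

# Confirm the OpenEBS control-plane pods reach Running state:
kubectl get pods -n openebs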

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs --namespace openebs --create-namespace --version 2.8.0
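
To confirm the release installed, a quick check assuming Helm 3:

# List the release and check its status in the openebs namespace:
helm list -n openebs
helm status openebs -n openebs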

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.8 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components (a sketch follows this list).
  • Upgrade Jiva PVs to 2.8, either one at a time or multiple volumes at once.
  • Upgrade cStor Pools to 2.8 and their associated volumes, either one at a time or multiple volumes at once.
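
A minimal sketch of the control-plane step, assuming the cluster was installed with the kubectl-based operator YAML (Helm installs would instead use helm upgrade with the 2.8.0 chart):

# Re-apply the 2.8.0 operator manifest to upgrade the control plane:
kubectl apply -f https://openebs.github.io/charts/2.8.0/openebs-operator.yaml
# Watch the control-plane pods restart with the new versions:
kubectl get pods -n openebs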

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Note: The community e2e pipelines verify upgrade testing only from non-deprecated releases (1.7 and higher) to 2.8. If you are running a release older than 1.7, OpenEBS recommends upgrading to the latest version as soon as possible.

Support

If you are having issues setting up or upgrading, you can reach out to the community (#openebs) on the Kubernetes Slack at https://slack.k8s.io.

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim (SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If a block device is formatted with a filesystem or mounted, a cStor Pool will not be created on it. In the current release, there are manual steps that can be followed to clear the filesystem or use partitions for creating cStor Pools; please reach out to the community (#openebs) at https://slack.k8s.io. A hedged sketch for clearing a filesystem signature follows this list.
  • If you are using cStor pools with ephemeral devices, then starting with 1.2, a cStor Pool will not be automatically re-created on the new devices after a node restart. This check has been put in place to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, and involve changing the status of the corresponding CSP to Init (a sketch also follows this list).
  • Capacity over-provisioning is enabled by default on cStor pools. If you don't have alerts set up for monitoring pool usage, the pools can become fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in #2855.
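
A hedged, destructive sketch for clearing a filesystem signature from a raw device before using it for a cStor pool; the device path is hypothetical, and everything on the device is lost:

# DESTRUCTIVE: erases filesystem signatures from the device. Double-check
# the device path and confirm it holds no needed data before running.
sudo wipefs -a /dev/sdX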
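
And a sketch of the ephemeral-device recovery step, assuming the name of the affected CSP is known; per the note above, recovery involves setting the CSP status to Init:

# Edit the affected cStorPool (CSP) resource and change status.phase
# to Init so the pool is re-created (the CSP name is hypothetical):
kubectl edit csp cstor-pool-example-1234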
