openebs/openebs v2.1.0


Release Summary

OpenEBS v2.1 is a developer release focused on code, tests and build refactoring along with some critical bug fixes and user enhancements. This release also introduces support for remote Backup and Restore of ZFS Local PV using OpenEBS Velero plugin.

Here are some of the key highlights in this release:

New capabilities:

  • OpenEBS ZFS Local PV adds support for Backup and Restore by enhancing the OpenEBS Velero Plugin. For detailed instructions to try this feature, please refer to this doc; a brief example follows this list.
  • OpenEBS Mayastor continues its momentum by enhancing support for Rebuild and other fixes. For detailed instructions on how to get started with Mayastor please refer to this Quickstart guide.
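
As a rough sketch of the ZFS Local PV Backup and Restore workflow (the backup name, application namespace, and volume snapshot location below are placeholders, and the VolumeSnapshotLocation for the OpenEBS Velero Plugin must already be configured as described in the doc), the standard Velero CLI is used for both directions:

# Back up the namespace that contains the ZFS Local PV application (names are placeholders)
velero backup create zfspv-backup --include-namespaces=my-app-ns --snapshot-volumes --volume-snapshot-locations=zfspv-snapshot-location

# Restore the volumes from that backup
velero restore create --from-backup zfspv-backup --restore-volumes=true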

Key Improvements:

  • Enhanced the Velero Plugin so that a Backup of one volume and a Restore of another volume can run simultaneously.
  • Added a validation to restrict OpenEBS Namespace deletion if there are pools or volumes configured. The validation is added via Kubernetes admission webhook.
  • Added support to restrict creation of cStor Pools (via CSPC) on Block Devices that are tagged (or reserved); see the tagging example after this list.
  • Enhanced NDM to automatically create a block device tag on the discovered device if the device matches a certain path name pattern.
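
As an illustration of working with tagged Block Devices (the block device name and tag value below are placeholders), a discovered device can also be tagged manually by applying the openebs.io/block-device-tag label, which the CSPC validation then treats as reserved:

kubectl label blockdevice -n openebs blockdevice-0123456789abcdef openebs.io/block-device-tag=fast-ssd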

Key Bug Fixes:

  • Fixes an issue where local backup and restore of cStor volumes provisioned via CSI were failing.
  • Fixes an issue where cStor CSI Volume remount would fail intermittently when the application pod is restarted or after recovering from a network loss between the application pod and the target node.
  • Fixes an issue where BDC cleanup by NDM would cause a panic if the bound BD was manually deleted.

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @rohansadale, @AJEETRAI707, @smijolovic, @jlcox1970

Thanks also to @sonasingh46 for being the 2.1 release coordinator.

Show your Support

Thank you @semekh (Hamravesh) and @tobg (TOBG Services Ltd) for becoming public references and supporters of OpenEBS by sharing your use cases on ADOPTERS.md.

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed.
  • Kubernetes 1.17+ is recommended for using cStor CSI drivers.
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM ConfigMap to exclude paths before installing OpenEBS (see the sketch after this list).
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.
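
For reference, path exclusion is driven by the path filter entry in the NDM ConfigMap shipped with the operator YAML (assumed here to be named openebs-ndm-config). The snippet below is a minimal sketch and the exclude list is illustrative; extend it with the device paths you want NDM to skip:

filterconfigs:
  - key: path-filter
    name: path filter
    state: true
    include: ""
    # comma-separated device path prefixes that NDM should ignore (illustrative values)
    exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"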

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.1.0/openebs-operator.yaml
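
After applying the operator YAML, the installation can be verified with standard kubectl commands (the openebs namespace is created by the operator):

kubectl get pods -n openebs          # control plane and NDM pods should reach Running
kubectl get blockdevice -n openebs   # devices discovered by NDM
kubectl get storageclass             # default OpenEBS storage classes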

Install using Helm charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.1.0
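
Note that the command above uses Helm 2 syntax. With Helm 3, the --name flag no longer exists and the release name is passed as the first argument; a roughly equivalent invocation (assuming the openebs namespace is created first) would be:

kubectl create namespace openebs
helm install openebs openebs/openebs --namespace openebs --version 2.1.0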

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.1 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components (see the example after this list).
  • Upgrade Jiva PVs to 2.1, either one at a time or multiple volumes.
  • Upgrade cStor Pools to 2.1 and their associated Volumes, either one at a time or multiple volumes.
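
As a sketch of the first step above, the Control Plane components are upgraded by re-applying the operator YAML for the target version; the Jiva and cStor data plane upgrades are then run separately as described in the detailed steps:

kubectl apply -f https://openebs.github.io/charts/2.1.0/openebs-operator.yaml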

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues setting up or upgrading, you can reach out to:

  • The OpenEBS community on the Kubernetes Slack workspace (#openebs channel at https://slack.k8s.io).
  • GitHub issues on the openebs/openebs repository.

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim (SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC. (An illustrative CSPC spec follows this list.)
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If a block device is formatted with a filesystem or mounted, then a cStor Pool will not be created on it. In the current release, there are manual steps that can be followed to clear the filesystem or use partitions for creating cStor Pools; please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, then starting with 1.2, the cStor Pool will not be automatically re-created on the new devices after a node restart. This check was put in place to guard against nodes accidentally coming back up with new disks. The steps to recover from such a situation are provided here, and involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in #2855.
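
For illustration, a minimal CStorPoolCluster spec of the kind recommended above might look like the following; the node hostname and block device name are placeholders, and the full schema is described in the cStor documentation:

apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cstor-pool-cluster
  namespace: openebs
spec:
  pools:
    # one pool instance on the node identified by this hostname (placeholder)
    - nodeSelector:
        kubernetes.io/hostname: worker-node-1
      dataRaidGroups:
        - blockDevices:
            # an unclaimed, unformatted BlockDevice on that node (placeholder name)
            - blockDeviceName: blockdevice-0123456789abcdef
      poolConfig:
        dataRaidGroupType: stripe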
