OpenEBS v2.7.0


Release Summary

OpenEBS v2.7 is a maintenance release focused on preparing for better structuring of the code and on improving the E2e frameworks. This release also includes some key user-requested bug fixes and enhancements.

Here are some of the key highlights in this release.

Key Improvements

  • [Mayastor] Several bug fixes to Mayastor volumes, along with preparation for ANA support. See the Mayastor release notes.
  • [Jiva] Hardened the Jiva CSI driver (introduced for alpha testing) with automated E2e tests and fixed several issues those tests uncovered. The driver remains available for alpha testing. For instructions on how to set up and use the Jiva CSI driver, see https://github.com/openebs/jiva-operator.
  • LVM Local PV is enhanced with additional features and some key bug fixes, such as (see the StorageClass sketch after this list):
    • Support for creating CSIStorageCapacity objects to allow dynamic provisioning based on available capacity. (openebs/lvm-localpv#21)
    • Enhanced the provisioning workflow to handle capacity exhaustion on an LVM volume group and allow the volume to be re-scheduled to another node. (openebs/lvm-localpv#23)
    • Automated E2e tests for provisioning and resize features.
  • ZFS Local PV added support for performing backup/restore on encrypted ZFS pools. (openebs/zfs-localpv#292)
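
A minimal sketch of an lvm-localpv StorageClass that takes advantage of the capacity-based scheduling mentioned above; the volume group name lvmvg is an assumption and must match a VG that actually exists on your nodes:

kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvm
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "lvmvg"  # assumption: an LVM volume group present on the nodes
volumeBindingMode: WaitForFirstConsumer  # lets the scheduler consult CSIStorageCapacity
EOF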

Key Bug Fixes

  • Fixed an issue with Mayastor where an error in creating a nexus could cause a hang. (openebs/mayastor#765)
  • Fixed an issue with cStor that was causing a crash in arm64 due to invalid uzfs zc_nvlist_dst_size handling. (#3346)
  • Fixed an issue where NDM would not automatically de-activate a removed disk when GPTBasedUUID was enabled. (openebs/node-disk-manager#546)
  • Fixed an issue where Rawfile Local PV volume deletion would not delete the provisioned PV folders. (openebs/rawfile-localpv#5)
  • Fixed an issue where resizing btrfs filesystems on top of Rawfile Local PV volumes would fail.
  • Several fixes to docs were also included in this release.

Backward Incompatibilities

  • Kubernetes 1.17 or a higher release is recommended, as this release contains the following updates that are not compatible with older Kubernetes releases.

    • The CSI components have been upgraded to:
      • k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
      • k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0
      • k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
      • k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1 (for Mayastor CSI volumes)
      • k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
      • k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
      • k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
      • k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3 (for cStor CSI volumes)
      • k8s.gcr.io/sig-storage/snapshot-controller:v3.0.3 (for cStor CSI volumes)
  • If you are upgrading from a version of cStor operators older than 2.6 to this version, you will need to manually delete the cStor CSI driver object prior to upgrading:

    kubectl delete csidriver cstor.csi.openebs.io

    For complete details on how to upgrade your cStor operators, see https://github.com/openebs/upgrade/blob/master/docs/upgrade.md#cspc-pools.

  • The CRD API version for the cStor custom resources has been updated to v1. If you are upgrading via the helm chart, make sure the new CRDs are applied: https://github.com/openebs/cstor-operators/tree/master/deploy/helm/charts/crds
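
If you manage cStor via the helm chart, one way to apply the updated v1 CRDs is to fetch them from the repository linked above before upgrading the release (a sketch only; helm does not upgrade CRDs on its own):

# Apply the v1 cStor CRDs prior to upgrading the helm release
git clone https://github.com/openebs/cstor-operators.git
kubectl apply -f cstor-operators/deploy/helm/charts/crds/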

Other notable updates

  • OpenEBS has applied to become a CNCF incubation project and is currently undergoing the Storage SIG review, addressing the review comments provided. One of the significant efforts in this direction is upstreaming the changes made in uZFS to OpenZFS.
  • Working on automating further Day 2 operations, such as automatically detecting a node deletion from the cluster and re-balancing the volume replicas onto the next available node.
  • Migrating the CI pipelines from Travis to GitHub actions.
  • Several enhancements to the cStor Operators documentation with a lot of help from @survivant.
  • PSP support has been added to ZFS Local PV and cStor helm charts.
  • Improving the OpenEBS Rawfile Local PV in preparation for its beta release. This release fixes some issues and adds support for setting resource limits on the sidecar, along with a few other optimizations.
  • Sample Grafana dashboards for managing OpenEBS are being developed here: https://github.com/openebs/charts/tree/gh-pages/grafana-charts

Show your Support

Thank you, Armel Soro, Art Win, @ssytnikov18 from Verizon Media, Mike T, and grouchojeff, for becoming public references and supporters of OpenEBS by sharing your use cases in ADOPTERS.md.

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shoutouts!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS community going.

We are excited to welcome Praveen Kumar G T as a maintainer for the Local PV engines.

A very special thanks to our first-time contributors to code, tests, and docs: @luizcarlosfaria, @z0marlin, @iyashu, @dyasny, @hanieh-m, @si458, @Ab-hishek

Getting Started

Prerequisites to install

  • Kubernetes 1.17 or newer is installed.
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node Disk Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM ConfigMap before installing OpenEBS (see the sketch after this list).
  • NDM runs as a privileged pod since it needs to access device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.
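
For reference, NDM device exclusion is driven by the path filter in its ConfigMap. A minimal sketch of the relevant fragment is shown below; the exclude list is only an example and should be adapted to your nodes:

# Fragment of the node-disk-manager ConfigMap; devices whose paths
# match an entry in "exclude" are skipped during discovery.
filterconfigs:
  - key: path-filter
    name: path filter
    state: true
    include: ""
    exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"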

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.7.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs --namespace openebs --create-namespace --version 2.7.0
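
Whichever install method you use, a quick sanity check (not part of the official steps) is to confirm the control-plane pods reach the Running state:

# All OpenEBS control-plane pods should eventually show Running
kubectl get pods -n openebs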

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.7 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided at https://github.com/openebs/upgrade/blob/master/docs/upgrade.md.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.7, either one at a time or several at once.
  • Upgrade cStor Pools to 2.7 along with their associated volumes, either one at a time or several at once.

For upgrading from releases prior to 1.0, please refer to the upgrade instructions of the respective release.

Note: The community e2e pipelines verify upgrade testing only from non-deprecated releases (1.6 and higher) to 2.7. If you are running a release older than 1.6, OpenEBS recommends upgrading to the latest version as soon as possible.

Support

If you are having issues setting up or upgrading, you can contact the community on the #openebs channel at https://slack.k8s.io or raise an issue on GitHub.

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to the open issues on GitHub. If you are using the cStor storage engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim(SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If a block device is formatted with a filesystem or mounted, a cStor Pool will not be created on it. In the current release, there are manual steps that can be followed to clear the filesystem or to use partitions for creating cStor Pools; please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, then starting with release 1.2, a cStor Pool will not be automatically re-created on the new devices after a node restart. This check was added to guard against nodes being accidentally restarted with new disks. The steps to recover from such a situation involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in #2855.
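
To keep an eye on pool utilization before a pool fills up and volumes turn read-only, the pool custom resources report capacity. For example, assuming CSPC-provisioned pools with the operators running in the openebs namespace:

# CStorPoolInstances report used and free capacity per pool
kubectl get cspi -n openebs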
