openebs/openebs v2.2.0

Release Summary

OpenEBS v2.2 comes with a critical fix to NDM and several enhancements to cStor, ZFS Local PV and Mayastor. Here are some of the key highlights in this release:

New capabilities

  • OpenEBS ZFS Local PV adds support for Incremental Backup and Restore by enhancing the OpenEBS Velero Plugin. For detailed instructions to try this feature, please refer to this doc; a configuration sketch is also provided after this list.

  • OpenEBS Mayastor instances now expose a gRPC API that enumerates the block disk devices attached to the host node, to help identify suitable candidates for inclusion in storage Pools during configuration. This functionality is also accessible through the mayastor-client diagnostic utility (see the sketch after this list). For further details on enhancements and bug fixes in Mayastor, please refer to the Mayastor release notes.
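
As a configuration sketch of the incremental backup setup (assuming a MinIO/S3 target; the bucket, endpoint, and option values below are placeholders to be verified against the doc linked above):

apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: zfspv-incr
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero                          # placeholder bucket name
    prefix: zfs
    namespace: openebs                      # namespace where the ZFS Local PV CRs are created
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000     # placeholder S3 endpoint
    incrBackupCount: "3"                    # incremental backups taken between full backups

Incremental backups are then driven by a Velero schedule, for example:

velero schedule create zfspv-sched --schedule="*/30 * * * *" --snapshot-volumes=true --volume-snapshot-locations=zfspv-incr --include-namespaces=<app-namespace>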

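A rough sketch of listing block devices through the diagnostic utility; the subcommand shown is an assumption based on the description above, so confirm it with mayastor-client --help on your build:

# Assumed invocation; run from (or exec into) a Mayastor pod on the target node.
kubectl -n mayastor exec <mayastor-pod> -- mayastor-client device list
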
Key Improvements

  • Enhanced the Velero Plugin to restore a ZFS Local PV into a different cluster, or onto a different node in the same cluster. This depends on Velero's velero.io/change-pvc-node: RestoreItemAction feature; a configuration sketch is provided after this list. openebs/velero-plugin#118.

  • The Kubernetes custom resources for managing cStor Backup and Restore have been promoted to v1. This change is backward compatible with earlier resources and transparent to users. When the SPC resources are migrated to CSPC, the related Backup/Restore resources on older volumes are also upgraded to v1. openebs/upgrade#59.

  • Enhanced the SPC to CSPC migration feature with multiple usability fixes, such as support for migrating multiple volumes in parallel (openebs/upgrade#52) and the ability to detect changes in the underlying virtual disk resources (BDs) and automatically update them in the CSPC (openebs/upgrade#53). Prior to this release, users migrating to CSPC had to update the BDs manually.

  • Enhanced the Velero Plugin to use custom certificates for S3 object storage. openebs/velero-plugin#115.

  • Enhanced cStor Operators to allow users to specify the name of the new node for a previously configured cStor Pool; a CSPC snippet is provided after this list. This helps in scenarios where a Kubernetes node is replaced with a new node and the block devices from the old node, containing the cStor Pool and volume data, are attached to the new node. openebs/cstor-operators#167.

  • Enhanced NDM OS discovery logic for nodes that use /dev/root as the root filesystem. openebs/node-disk-manager#492.

  • Enhanced NDM OS discovery logic to support excluding multiple devices that could be mounted as host filesystem directories. openebs/node-disk-manager#224.
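
For the cross-node restore described above, upstream Velero drives the node change through a plugin ConfigMap in the Velero namespace. A minimal sketch, assuming Velero's documented node-selector change pattern (node names below are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: change-pvc-node-selector-config
  namespace: velero
  labels:
    velero.io/plugin-config: ""
    velero.io/change-pvc-node-selector: RestoreItemAction
data:
  # <node name recorded in the backup>: <node name to restore onto>
  old-node-1: new-node-1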

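To illustrate the node replacement described above, the relevant part of a CStorPoolCluster would be edited so that the pool's nodeSelector points at the replacement node while the same block devices are retained (all names below are placeholders):

apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cstor-pool-cluster
  namespace: openebs
spec:
  pools:
  - nodeSelector:
      kubernetes.io/hostname: new-node-1       # updated from the old node's hostname
    dataRaidGroups:
    - blockDevices:
      - blockDeviceName: blockdevice-123abc    # same BD, now attached to the new node
    poolConfig:
      dataRaidGroupType: stripe
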
Key Bug Fixes

  • Fixes an issue where NDM could cause data loss by creating a partition table on an uninitialized iSCSI volume. This can happen after a node reboot due to a race condition between the NDM pod initializing and the iSCSI volume initializing, if the iSCSI volume is not yet fully initialized when NDM probes for device details. The issue was observed with NDM 0.8.0, released with OpenEBS 2.0, and has been fixed in the OpenEBS 2.1.1 and OpenEBS 2.2.0 (latest) releases.

Shout outs!

MANY THANKS to our existing contributors and to everyone helping to keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @didier-durand, @zlymeda, @avats-dev, and many more contributing via Hacktoberfest.

Show your Support

Thank you @danielsand for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md.

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed.
  • Kubernetes 1.17+ is recommended for using cStor CSI drivers.
  • Make sure that you run the installation steps below with the cluster-admin context. The installation involves creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some disks from being discovered, update the filters in the NDM ConfigMap to exclude those paths before installing OpenEBS (see the snippet after this list).
  • NDM runs as a privileged pod since it needs to access device information. Please make the necessary changes to grant it access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.
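
For reference, the path filter in the openebs-ndm-config ConfigMap looks roughly like this; the exclude list below is illustrative, so extend it with the paths you want NDM to ignore before applying the operator YAML:

filterconfigs:
  - key: path-filter
    name: path filter
    state: true
    include: ""
    exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"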

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.2.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.2.0
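
Whichever install method you use, a quick way to verify the control plane is to check that the OpenEBS pods reach the Running state and the default StorageClasses are created:

kubectl get pods -n openebs
kubectl get storageclass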

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.2 is supported only from 1.0 or higher and follows a similar process to earlier releases. Detailed steps are provided here, and a sample upgrade Job is sketched after the list below.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.2, either one at a time or several volumes at once.
  • Upgrade cStor Pools and their associated Volumes to 2.2, either one at a time or several at once.
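
As a rough illustration of the upgrade step, the Job below follows the pattern documented in the openebs/upgrade repository; the image tag, arguments, and resource names are placeholders, so copy the exact spec from the detailed steps linked above:

apiVersion: batch/v1
kind: Job
metadata:
  name: cstor-cspc-upgrade
  namespace: openebs
spec:
  backoffLimit: 4
  template:
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
      - name: upgrade-cstor-cspc
        image: openebs/upgrade:2.2.0          # assumed image name/tag; confirm in the upgrade docs
        args:
        - "cstor-cspc"                        # resource type being upgraded
        - "--from-version=2.1.0"
        - "--to-version=2.2.0"
        - "cspc-pool-1"                       # one or more CSPC names (placeholders)
      restartPolicy: OnFailure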

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues setting up or upgrading, you can reach the community (#openebs) at https://slack.k8s.io or raise an issue on the openebs/openebs GitHub repository.

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is the new custom resource called cStorPoolCluster (CSPC). Provisioning cStor Pools using StoragePoolClaim (SPC) is still supported, but it will be deprecated in future releases. Pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If a block device is formatted with a filesystem or is mounted, a cStor Pool will not be created on it. In the current release there are manual steps that can be followed to clear the filesystem or to use partitions for creating cStor Pools (a sketch follows this list); please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor Pools with ephemeral devices, then starting with 1.2, the cStor Pool will not be automatically re-created on the new devices after a node restart. This check was added to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here and involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in #2855.
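
For the raw block device requirement above, one common way to clear an existing filesystem signature from a disk that is not in use is wipefs; this is destructive and only a sketch, so double-check the device name and consult the community steps before running it:

# Destructive: erases filesystem signatures from the named device.
wipefs -a /dev/sdX        # replace /dev/sdX with the intended block device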
