openebs/openebs v1.11.0

Release Summary

The theme for OpenEBS v1.11 has been polishing the OpenEBS storage engines (Mayastor, ZFS Local PV, and the cStor CSI driver) and preparing them for beta. This release also includes several supportability enhancements and fixes for the existing engines.

For a detailed change summary, please refer to Release 1.11 Change Summary.

Before getting into the release details,

Important Announcement: OpenEBS Community Slack channels will be migrated to the Kubernetes Slack workspace by June 22nd

In the interest of neutral governance, OpenEBS community support via Slack is being migrated from openebs-community Slack (a free Slack workspace with limited message retention) to the OpenEBS channels on the Kubernetes Slack workspace owned by CNCF.

The #openebs-users channel will be marked as read-only by June 22nd.

More details about this migration can be found here.

Given that openebs-community Slack has been a neutral home for many vendors offering free and commercial support/products on top of OpenEBS, the workspace will continue to live on. These vendors are requested to create their own public channels; information about those channels can be communicated to users via the OpenEBS website by raising an issue/PR at https://github.com/openebs/website.


Here are some of the key highlights in this release:

New capabilities:

  • OpenEBS Mayastor continues its momentum by adding support for rebuild and NVMe-oF, enhanced supportability, and several other fixes. For detailed instructions on how to get started with Mayastor, please refer to this Quickstart guide.
  • OpenEBS ZFS Local PV has been declared as beta. For detailed instructions on how to get started with ZFS Local PV please refer to the Quick start guide.
  • OpenEBS cStor CSI support is marked as feature-complete and further releases will focus on additional integration and e2e tests.
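With ZFS Local PV now beta, a minimal StorageClass sketch looks like the following. The pool name zfspv-pool is an assumption for illustration; it must be a zpool that already exists on each node.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"  # assumed pool name; must already exist on each node
  fstype: "zfs"           # "zfs" creates a dataset; "ext4"/"xfs" create a zvol
```

Note that as of this release, StorageClass parameter values are treated case-insensitively (zfs-localpv#144).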

Key Improvements:

  • Enhanced helm charts to make NDM filterconfigs.state configurable. charts#107 (@fukuta-tatsuya-intec)
  • Added configuration to exclude rbd devices from being used for creating Block Devices charts#111 (@GTB3NW)
  • Added support to display FSType information in Block Devices node-disk-manager#438 (@harshthakur9030)
  • Added support to mount ZFS datasets using the legacy mount property, allowing multiple mounts on a single node. zfs-localpv#151 (@pawanpraka1)
  • Added automation tests for validating ZFS Local PV and cStor Backup/Restore. (@w3aman @shashank855)
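The NDM filter improvements above can be driven through helm chart values. A hedged sketch of an override file follows; the exact value keys depend on the chart version, so verify them against the chart's values.yaml before use.

```yaml
# values.yaml override for the stable/openebs chart (key names assumed;
# verify with: helm inspect values stable/openebs --version 1.11.0)
ndm:
  filters:
    # exclude rbd devices (charts#111) along with common virtual devices
    excludePaths: "loop,fd0,sr0,/dev/ram,/dev/dm-,/dev/md,/dev/rbd"
    excludeVendors: "CLOUDBYT,OpenEBS"
```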

Key Bug Fixes:

  • Fixes an issue where volumes meant to be filesystem datasets got created as zvols due to a case mismatch in a StorageClass parameter. The fix makes StorageClass parameters case-insensitive. zfs-localpv#144 (@cruwe)
  • Fixes an issue where the read-only option was not being set on ZFS volumes. zfs-localpv#137 (@pawanpraka1)
  • Fixes an issue where an incorrect pool name or other parameters in the StorageClass would result in stale ZFS Volume CRs being created. zfs-localpv#121 zfs-localpv#145 (@pawanpraka1)
  • Fixes an issue where the user-configured MAX_CHAIN_LENGTH environment variable was not being read by Jiva. jiva#309 (@payes)
  • Fixes an issue where a cStor Pool was being deleted forcefully before the replicas on it were deleted. This could cause data loss in situations where SPCs are incorrectly edited by the user and a cStor Pool deletion is attempted. maya#1710 (@mittachaitu)
  • Fixes an issue where a failure to delete the cStor Pool on the first attempt will leave an orphaned cStor custom resource (CSP) in the cluster. maya#1595 (@mittachaitu)

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS community going.

A very special thanks to our first-time contributors to code, tests, and docs: @cruwe, @sgielen, @ShubhamB99, @GTB3NW, @Icedroid, @fukuta-tatsuya-intec, @mtmn, @nrusinko, @radicand, @zadunn, @xUnholy.

We are also delighted to have @harshthakur9030, @semekh, and @vaniisgh contributing to OpenEBS via the CNCF Community Bridge Program.

Thanks also to @shubham14bajpai for being the 1.11 release coordinator.

Show your Support

Thank you @zadunn (Optoro), @meyskens, @stevefan1999-personal, and @darias1986 (DISID) for becoming public references and supporters of OpenEBS by sharing your use case in ADOPTERS.md.

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed.
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some disks from being discovered, update the filters in the NDM ConfigMap to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs access to device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.
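The device exclusion mentioned above is done in the path-filter section of the NDM ConfigMap before install. A sketch of the relevant excerpt follows; the defaults shown here mirror the shipped openebs-operator manifest and may vary slightly by release.

```yaml
# Excerpt of the node-disk-manager.config key in the openebs-ndm-config ConfigMap
filterconfigs:
  - key: os-disk-exclude-filter
    name: os disk exclude filter
    state: true
    exclude: "/,/etc/hosts,/boot"
  - key: vendor-filter
    name: vendor filter
    state: true
    include: ""
    exclude: "CLOUDBYT,OpenEBS"
  - key: path-filter
    name: path filter
    state: true
    include: ""
    exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"
```

Add any device paths you want NDM to ignore to the path-filter exclude list.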

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/1.11.0/openebs-operator.yaml

Install using Helm stable charts

helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.11.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 1.11 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 1.11, either one at a time or multiple volumes at once.
  • Upgrade cStor Pools and their associated volumes to 1.11, either one at a time or multiple at once.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you run into issues while setting up or upgrading, you can reach the community on the #openebs channel at https://slack.k8s.io.

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is to specify the list of block devices to be used in the StoragePoolClaim (SPC). The automatic selection of block devices has very limited support. Automatic provisioning of cStor pools with block devices of different capacities is not recommended.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem, partitioned, or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, then starting with 1.2, upon node restart, the cStor Pool will not be automatically re-created on the new devices. This check was put in place to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, and involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
  • The new version of the cStor schema is being worked on to address user feedback on the ease of cStor provisioning, as well as to make it easier to perform Day 2 operations on cStor Pools using GitOps. Note that existing StoragePoolClaim pools will continue to function as-is. Along with stabilizing the new schema, we have also started working on migration features, which will easily migrate clusters to the new schema in upcoming releases. Once the proposed changes are complete, seamless migration from the older CRs to the new ones will be supported. To track the progress of the proposed changes, please refer to this design proposal. Note: We recommend that users try out the new schema on greenfield clusters to provide feedback. Get started with these instructions.
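For the recommended explicit-device approach to cStor Pools, an SPC sketch looks like the following. The BlockDevice names are placeholders; substitute the real names reported by `kubectl get blockdevices -n openebs`.

```yaml
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped
  blockDevices:
    blockDeviceList:
      # placeholders; use the BlockDevice names NDM discovered on your nodes
      - blockdevice-<id-on-node-1>
      - blockdevice-<id-on-node-2>
      - blockdevice-<id-on-node-3>
```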