OpenEBS v2.0 marks a significant milestone: the cStor CSI driver has graduated to beta, NDM gains improved capabilities to manage virtual and partitioned block devices, and much more.
OpenEBS v2.0 includes the following Storage Engines that are currently deployed in production by various organizations:
- cStor (CSI Driver available from 2.0 onwards)
- ZFS Local PV
- Dynamic Local PV hostpath
- Dynamic Local PV Block
OpenEBS v2.0 also includes the following storage engine, which is going through alpha testing at a few organizations. Please get in touch with us if you would like to participate in the alpha testing of this engine.
- Dynamic Local PV - Rawfile
For a change summary since v1.12, please refer to Release 2.0 Change Summary.
Here are some of the key highlights in this release:
- OpenEBS cStor provisioning with the new schema and CSI drivers has been declared beta. For detailed instructions on how to get started with the new cStor Operators, please refer to the Quickstart guide. The new version of the cStor schema addresses user feedback on ease of use for cStor provisioning, as well as making it easier to perform Day 2 operations on cStor pools using GitOps. Note that existing StoragePoolClaim (SPC) pools will continue to function as-is, and support is available to migrate from the SPC schema to the new schema. In addition to supporting all the features of SPC-based cStor pools, the CSPC (CStorPoolCluster) enables the following:
- Expanding a cStor pool by adding block devices to the CSPC YAML
- Replacing a block device used within a cStor pool by editing the CSPC YAML
- Scaling cStor volume replicas up or down by editing the CStorVolumeConfig YAML
- Expanding a volume by updating the PVC YAML
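The Day 2 operations above are driven by declaratively editing the CSPC resource. Below is a minimal sketch of what a CSPC might look like; the block device name, node hostname, and metadata names are illustrative, not taken from a real cluster:

```yaml
# Hypothetical CSPC sketch -- block device and node names are illustrative.
apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cstor-pool-cluster
  namespace: openebs
spec:
  pools:
    - nodeSelector:
        kubernetes.io/hostname: worker-node-1
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: blockdevice-abc123
            # To expand the pool, append additional block devices here
            # and re-apply the YAML; to replace a device, swap the name.
      poolConfig:
        dataRaidGroupType: stripe
```

Because the pool topology lives in this single YAML, the expand/replace operations become a `kubectl apply` (or a Git commit, under GitOps) rather than a sequence of imperative steps.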
- Significant improvements to NDM in supporting (and better handling of) partitions and virtual block devices across reboots.
- OpenEBS Mayastor continues its momentum by adding support for rebuild, NVMe-oF, enhanced supportability, and several other fixes. For detailed instructions on how to get started with Mayastor, please refer to this Quickstart guide.
- Continued focus on additional integration and e2e tests for all engines, along with more documentation.
- Enhanced the Jiva target controller to track internal snapshots and reclaim their space.
- Support for enabling/disabling the leader election mechanism, which involves interacting with kube-apiserver. In deployments where provisioners are configured with a single replica, leader election can be disabled. The default is enabled. The setting is controlled via the environment variable "LEADER_ELECTION" in the operator YAML or via the Helm value (enableLeaderElection).
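As a sketch of how the environment variable could be set, here is an excerpt of a single-replica provisioner Deployment; the container name is illustrative and the exact manifest should be taken from the operator YAML:

```yaml
# Deployment excerpt -- disabling leader election is only safe when the
# provisioner runs with a single replica.
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: openebs-provisioner   # container name is illustrative
          env:
            - name: LEADER_ELECTION
              value: "false"
```

With leader election disabled, the provisioner skips the kube-apiserver lease traffic it would otherwise generate, which is the main benefit in single-replica deployments.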
Key Bug Fixes:
- Fixes an issue where NDM would fail to wipe the filesystem of the released sparse block device.
- Fixes an issue with the mounting of XFS cloned volume.
- Fixes an issue where provisioning a PV with fsType: zfs would fail if the capacity is not a multiple of the record size specified in the StorageClass.
MANY THANKS to our existing contributors and to everyone helping to keep the OpenEBS community going.
Thanks, also to @akhilerm for being the 2.0 release coordinator.
Show your Support
A very special thanks to @yhrenlee for sharing the story in DoK Community, about how OpenEBS helped Arista with migrating their services to Kubernetes.
Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.
Prerequisites to install
- Kubernetes 1.14 or newer is installed.
- Kubernetes 1.17+ is recommended for using cStor CSI drivers.
- Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
- Make sure iSCSI Initiator is installed on the Kubernetes nodes.
- Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM ConfigMap to exclude those paths before installing OpenEBS.
- NDM runs as a privileged pod since it needs to access device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.
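As a sketch of the NDM filter configuration mentioned above, the ConfigMap below excludes some device paths from discovery; the exclude list shown is an example and should be adapted to your nodes:

```yaml
# Hypothetical NDM ConfigMap sketch -- the excluded paths are examples.
apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-ndm-config
  namespace: openebs
data:
  node-disk-manager.config: |
    filterconfigs:
      - key: path-filter
        name: path filter
        state: true
        include: ""
        exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"
```

Applying the filter before installing OpenEBS ensures the excluded devices never show up as BlockDevice resources in the first place.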
Install using kubectl
kubectl apply -f https://openebs.github.io/charts/2.0.0/openebs-operator.yaml
Install using Helm stable charts
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.0.0
For more details refer to the documentation at https://docs.openebs.io/
Upgrade to 2.0 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.
- Upgrade OpenEBS Control Plane components.
- Upgrade Jiva PVs to 2.0, either one at a time or multiple volumes.
- Upgrade cStor Pools to 2.0 and their associated Volumes, either one at a time or multiple volumes.
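As a rough illustration of the volume upgrade step, data-plane upgrades are typically run as a Kubernetes Job. The sketch below is hypothetical: the image tag, arguments, service account, and PV name are assumptions for illustration only, so follow the detailed upgrade steps linked above for the exact spec:

```yaml
# Hypothetical upgrade Job sketch -- image, args, and PV name are
# illustrative; consult the detailed upgrade steps for the exact spec.
apiVersion: batch/v1
kind: Job
metadata:
  name: jiva-volume-upgrade
  namespace: openebs
spec:
  backoffLimit: 4
  template:
    spec:
      serviceAccountName: openebs-maya-operator   # assumed service account
      restartPolicy: OnFailure
      containers:
        - name: upgrade
          image: openebs/m-upgrade:2.0.0          # assumed image tag
          args:
            - "jiva-volume"
            - "--from-version=1.12.0"
            - "--to-version=2.0.0"
            - "pvc-example-volume-name"           # hypothetical PV name
```

Running the upgrade as a Job makes it repeatable and lets you watch progress and failures through the Job's pod logs.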
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
If you are having issues setting up or upgrading, you can contact:
- OpenEBS Community for Support on Kubernetes Slack
- Raise an issue
- Subscribe and reach out on our OpenEBS CNCF Mailing lists
- Join Community Meetings
Major Limitations and Notes
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
- The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim(SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
- When using cStor Pools, make sure that raw block devices are available on the nodes. If a block device is formatted with a filesystem or mounted, then a cStor Pool will not be created on it. In the current release, there are manual steps that can be followed to clear the filesystem or use partitions for creating cStor Pools; please reach out to the community (#openebs) at https://slack.k8s.io.
- If you are using cStor pools with ephemeral devices, starting with 1.2 - upon node restart, the cStor Pool will not be automatically re-created on the new devices. This check was added to ensure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation, which involve updating the status of the corresponding CSP, are provided here.
- Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in #2855.
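One way to cap over-provisioning is a namespace-scoped Kubernetes ResourceQuota keyed to the cStor StorageClass, as a sketch; the StorageClass name, namespace, and limit below are illustrative:

```yaml
# Caps the total capacity that PVCs in this namespace can request from the
# named StorageClass; class name and limit are illustrative.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cstor-storage-quota
  namespace: default
spec:
  hard:
    openebs-cstor.storageclass.storage.k8s.io/requests.storage: "100Gi"
```

With the quota in place, PVC creation fails once the aggregate requested capacity exceeds the limit, preventing the pool from being silently over-committed.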