OpenEBS v2.3 is our Hacktoberfest release, with 40+ new contributors added to the project, and ships with ARM64 support for cStor, Jiva, and Dynamic Local PV. Mayastor is seeing higher adoption, resulting in further fixes and enhancements.
Here are some of the key highlights in this release:
- ARM64 support (declared beta) for OpenEBS Data Engines - cStor, Jiva, Local PV (hostpath and device), and ZFS Local PV.
- A significant improvement in this release is the support for multi-arch container images for amd64 and arm64. The multi-arch images are available on Docker Hub and enable users to run OpenEBS in Kubernetes clusters that have a mix of arm64 and amd64 nodes.
- In addition to ARM64 support, the Local PV (hostpath and device) multi-arch container images also include support for arm32 and Power systems.
- The arch-specific container images, like `<image name>-amd64:<image-tag>`, are also available from Docker Hub and Quay, for backward compatibility with users running OpenEBS deployments that reference arch-specific images.
- To upgrade your volumes to multi-arch containers, make sure you specify `to-image` as the multi-arch image available from Docker Hub, or your own copy of it.
- A special shout-out and many thanks to @xUnholy, @shubham14bajpai, @akhilerm, and @prateekpandey14 for adding multi-arch support to the 27 OpenEBS container images generated from 14+ GitHub repositories. Thanks also to @wangzihao3, @radicand, @sgielen, @Pensu, and many more users from our Slack community for helping with testing, feedback, and fixes by using the early ARM64 builds in dev and production.
- Enhanced the cStor Velero plugin to help automate restores from incremental backups. Restoring an incremental backup involves restoring the full backup (also called the base backup) and the subsequent incremental backups up to the desired one. With this release, the user can set a new parameter (`restoreAllIncrementalSnapshots`) in the `VolumeSnapshotLocation` to automate the restore of the required base and incremental backups. For detailed instructions to try this feature, please refer to this doc.
- Enhanced Node Disk Manager (NDM) to discover and create Block Device custom resources for device mapper (dm) devices such as LUKS-encrypted devices and LVM devices. Prior to this release, users of dm devices had to manually create the corresponding Block Device CRs.
- Enhanced the NDM block device tagging feature so that a block device can be reserved, preventing it from being assigned to the Local PV (device) or cStor data engines. A block device is reserved by specifying an empty value for the block device tag.
- Added support to install ZFS Local PV using Kustomize. Also updated the default upgrade strategy for the ZFS CSI driver to run in parallel instead of rolling upgrades.
- Several enhancements and fixes from the Community towards OpenEBS documentation, build and release scripts from the Hacktoberfest participation.
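To illustrate the incremental-restore enhancement above, a `VolumeSnapshotLocation` enabling the new parameter could look like the following sketch. The location name, bucket, and prefix are hypothetical placeholders, and the provider string assumes the cStor Velero plugin's `openebs.io/cstor-blockstore` provider; check the linked doc for the exact parameters supported by your plugin version.

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: cstor-incr           # hypothetical name
  namespace: velero
spec:
  provider: openebs.io/cstor-blockstore
  config:
    bucket: velero-backups   # hypothetical bucket
    prefix: cstor
    # Restore the base backup and all intermediate incremental
    # backups automatically, up to the requested one.
    restoreAllIncrementalSnapshots: "true"
```

With this in place, a `velero restore create` against an incremental backup no longer requires manually restoring each preceding backup in sequence.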
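For the block device tagging enhancement, reserving a device amounts to putting the tag label with an empty value on the BlockDevice resource. A minimal sketch, assuming NDM's `openebs.io/block-device-tag` label and a hypothetical device name:

```yaml
apiVersion: openebs.io/v1alpha1
kind: BlockDevice
metadata:
  name: blockdevice-example            # hypothetical BlockDevice name
  namespace: openebs
  labels:
    # An empty tag value reserves this device: neither Local PV (device)
    # nor cStor will claim it.
    openebs.io/block-device-tag: ""
```

In practice the label is usually applied to an existing BlockDevice with `kubectl label` rather than by editing the CR by hand.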
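The new Kustomize-based install of ZFS Local PV can be sketched with a `kustomization.yaml` that pulls in the operator manifest. The resource path below is an assumption based on the openebs/zfs-localpv repository layout, so verify it against that repository's README.

```yaml
# kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # Assumed location of the ZFS Local PV operator manifest.
  - https://raw.githubusercontent.com/openebs/zfs-localpv/master/deploy/zfs-operator.yaml
```

Applying it is then a single `kubectl apply -k .` from the directory containing the file.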
Key Bug Fixes
- Fixed an issue with the upgrade of cStor and Jiva volumes in cases where volumes were provisioned without enabling the monitoring sidecar.
- Fixed an issue where the upgrade would always set the image registry to `quay.io/openebs` when the upgrade job didn't specify a registry location. The upgrade job will now fall back to the registry already configured on the existing pods.
Other notable updates
- OpenEBS has applied to become a CNCF incubation project and is currently undergoing a Storage SIG review; the project is addressing the review comments provided.
- Significant work is underway to make it easier to install only the components that users ultimately decide to use for their workloads. These features will allow users to run different flavors of OpenEBS in Kubernetes clusters optimized for the workloads they intend to run. This can be achieved in the current version using a customized Helm values file or a modified Kubernetes manifest file.
- Repositories are being refactored to help simplify the contributor onboarding process. For instance, with this release, the `dynamic-localpv-provisioner` has been moved from openebs/maya to its own repository, openebs/dynamic-localpv-provisioner. This refactoring of the source code will also enable simplified builds and a faster release process per data engine.
- Automation of further Day 2 operations, such as setting the cStor target IP after a cStor volume has been restored from a backup (thanks to @zlymeda), and automatically detecting node deletion from the cluster and re-balancing the volume replicas onto the next available node.
- Keeping the OpenEBS generated Kubernetes custom resources in sync with the upstream Kubernetes versions, like moving CRDs from
Show your Support
Are you using or evaluating OpenEBS? You can help OpenEBS in its journey toward becoming a CNCF incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.
MANY THANKS to our existing contributors, and to everyone helping keep the OpenEBS community going.
A very special thanks to our first-time contributors to code, tests, and docs: @filip-lebiecki, @hack3r-0m, @mtzaurus, @niladrih, @Akshay-Nagle, @Aman1440, @AshishMhrzn10, @Hard-Coder05, @ItsJulian, @KaranSinghBisht, @Naveenkhasyap, @Nelias, @Shivam7-1, @ShyamGit01, @Sumindar, @Taranzz25, @archit041198, @aryanrawlani28, @codegagan, @de-sh, @harikrishnajiju, @heygroot, @hnifmaghfur, @iTechsTR, @iamrajiv, @infiniteoverflow, @invidian, @kichloo, @lambda2, @lucasqueiroz, @prakhargurunani, @prakharshreyash15, @rafael-rosseto, @sabbatum, @salonigoyal2309, @sparkingdark, @sudhinm, @trishitapingolia, @vijay5158, @vmr1532.
Prerequisite to install
- Kubernetes 1.14 or newer is installed.
- Kubernetes 1.17+ is recommended for using cStor CSI drivers.
- Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
- Make sure iSCSI Initiator is installed on the Kubernetes nodes.
- Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from being discovered, update the filters in the NDM ConfigMap to exclude those paths before installing OpenEBS.
- NDM runs as a privileged pod since it needs to access device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.
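As an example of the NDM filter prerequisite above, the path filter in the NDM ConfigMap can be extended to exclude devices you don't want discovered. The sketch below shows the relevant fragment; the exclude list is illustrative (`/dev/sdz` is a hypothetical addition), so merge your entries with the defaults shipped in the operator YAML rather than replacing them.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-ndm-config
  namespace: openebs
data:
  node-disk-manager.config: |
    filterconfigs:
      - key: path-filter
        name: path filter
        state: true
        include: ""
        # Devices whose paths match these prefixes are ignored by NDM.
        exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md,/dev/sdz"
```

Edit the ConfigMap in the operator manifest before applying it, so the filters are in place when NDM first scans the nodes.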
Install using kubectl
kubectl apply -f https://openebs.github.io/charts/2.3.0/openebs-operator.yaml
Install using Helm stable charts
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.3.0
For more details refer to the documentation at https://docs.openebs.io/
Upgrading to 2.3 is supported only from 1.0 or higher and follows a similar process to earlier releases. Detailed steps are provided here.
- Upgrade OpenEBS Control Plane components.
- Upgrade Jiva PVs to 2.3, either one at a time or multiple volumes.
- Upgrade cStor Pools to 2.3 and their associated volumes, either one at a time or multiple volumes.
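As a sketch of the volume upgrade steps above, upgrades run as a Kubernetes Job. The manifest below is illustrative only, assuming the `openebs/m-upgrade` image and a hypothetical PV name; take the exact Job spec and arguments from the detailed upgrade steps linked above.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: jiva-volume-upgrade
  namespace: openebs
spec:
  backoffLimit: 4
  template:
    spec:
      serviceAccountName: openebs-maya-operator
      restartPolicy: OnFailure
      containers:
        - name: upgrade
          image: openebs/m-upgrade:2.3.0   # assumed image name and tag
          args:
            - "jiva-volume"
            - "--from-version=2.2.0"
            - "--to-version=2.3.0"
            # One or more PV names to upgrade; hypothetical example:
            - "pvc-0746d6df-example"
```

To upgrade multiple volumes in one Job, list additional PV names as further args; to use your own multi-arch image copy, point the image at your registry.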
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
If you are having issues setting up or upgrading, you can contact:
- OpenEBS Community for Support on Kubernetes Slack
- Raise an issue
- Subscribe and reach out on our OpenEBS CNCF Mailing lists
- Join Community Meetings
Major Limitations and Notes
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
- The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though provisioning cStor Pools using StoragePoolClaim (SPC) is still supported, it will be deprecated in a future release. Pools provisioned using SPC can be easily migrated to CSPC.
- When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community (#openebs) at https://slack.k8s.io.
- If you are using cStor pools with ephemeral devices, starting with 1.2 - upon node restart, cStor Pool will not be automatically re-created on the new devices. This check has been put to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to
- Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in #2855.
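For the over-provisioning note above, a standard Kubernetes `ResourceQuota` scoped to the cStor StorageClass can cap the total capacity requested in a namespace. A minimal sketch with hypothetical StorageClass and namespace names (see #2855 for the recommended setup):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cstor-storage-quota
  namespace: app-namespace        # hypothetical namespace
spec:
  hard:
    # Caps the sum of storage requests across PVCs that use
    # the (hypothetical) "cstor-sc" StorageClass.
    cstor-sc.storageclass.storage.k8s.io/requests.storage: 50Gi
```

Combined with pool usage alerts, this keeps cumulative volume capacity below what the pool can actually back, avoiding the read-only state.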