openebs/openebs v1.12.0


Release Summary

The theme for OpenEBS v1.12 continues to be polishing the OpenEBS storage engines Mayastor and the cStor CSI driver, and preparing them for Beta. Much of the contributor effort this cycle went into evaluating additional CI/CD and testing frameworks.

For a detailed change summary, please refer to Release 1.12 Change Summary.

Before getting into the release summary, an important announcement:


Important Announcement: OpenEBS Community Slack channels are migrating to the Kubernetes Slack workspace by Jun 22nd

The OpenEBS channels on the Kubernetes Slack workspace include #openebs (https://slack.k8s.io).

More details about this migration can be found here.


Here are some of the key highlights in this release:

Breaking Change/Deprecation

  • Important Note for OpenEBS Helm Users: The repository https://github.com/helm/charts is being deprecated, and all charts are moving to Helm Hub or to project-specific repositories. OpenEBS charts have migrated to the openebs/charts repository. Starting with 1.12.0, OpenEBS can be installed with the following Helm commands (a Helm 3 equivalent is sketched below):
    helm repo add openebs https://openebs.github.io/charts
    helm repo update
    helm install --namespace openebs --name openebs openebs/openebs
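
    Note that --name is Helm 2 syntax. On Helm 3, where the --name flag was removed, a minimal equivalent sketch (assuming the same openebs namespace) would be:

    kubectl create namespace openebs
    helm repo add openebs https://openebs.github.io/charts
    helm repo update
    helm install openebs openebs/openebs --namespace openebs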
    

Key Improvements and Key Bug Fixes

For the itemized lists of improvements and bug fixes, please refer to the Release 1.12 Change Summary mentioned above.

Alpha and Beta Engine updates

  • OpenEBS Mayastor continues its momentum with rebuild support, NVMe-oF support, enhanced supportability, and several other fixes. For detailed instructions on how to get started with Mayastor, please refer to this Quickstart guide.
  • OpenEBS ZFS Local PV has been declared beta. For detailed instructions on how to get started with ZFS Local PV, please refer to the Quick start guide.
  • OpenEBS cStor CSI support is marked as feature-complete, and further releases will focus on additional integration and e2e tests. For detailed instructions on getting started with the CSI driver for cStor, please refer to the Quick start guide.

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS community going.

A very special thanks to our first-time contributors to code, tests, and docs: @mikroskeem, @stuartpb, @utkudarilmaz

Thanks also to @mittachaitu for being the 1.12 release coordinator.

Announcing new Maintainers/Reviewers

With gratitude and joy, we welcome the following members to the OpenEBS organization as reviewers, in recognition of their continued contributions and commitment to helping the OpenEBS project and community.

  • "Mehran Kholdi",@semekh,Hamravesh #control-plane-maintainers
  • "Michael Fornaro",@xUnholy,Independent-Raspbernetes #control-plane-maintainers
  • "Peeyush Gupta",@Pensu,DigitalOcean #control-plane-maintainers

Check out our full list of maintainers and reviewers here. Our Governance policy is here.

Show your Support

Thank you @dstathos and @mikroskeem for becoming public references and supporters of OpenEBS by sharing your use cases in ADOPTERS.md.

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey toward becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Getting Started

Prerequisites to install

  • Kubernetes 1.14 or newer is installed.
  • Make sure that you run the installation steps below with the cluster-admin context. The installation involves creating a new Service Account and assigning it to OpenEBS components.
  • Make sure the iSCSI initiator is installed on the Kubernetes nodes (a sample check is sketched after this list).
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some disks from getting discovered, update the filters in the NDM ConfigMap to exclude those paths before installing OpenEBS (a sample filter excerpt follows this list).
  • NDM runs as a privileged pod, since it needs to access device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.
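
For example, on Debian/Ubuntu nodes, the iSCSI initiator can be installed and verified roughly as follows (a minimal sketch; package and service names may differ on other distributions):

    # Install the iSCSI initiator and start its service (Debian/Ubuntu).
    sudo apt-get update
    sudo apt-get install -y open-iscsi
    sudo systemctl enable --now iscsid

    # Verify that the initiator name is configured and the service is running.
    cat /etc/iscsi/initiatorname.iscsi
    sudo systemctl status iscsid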
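
For reference, the path filter in the NDM ConfigMap (typically openebs-ndm-config, under its node-disk-manager.config key) looks roughly like the excerpt below; the exclude list shown is illustrative, so adjust it to your nodes before installing:

    filterconfigs:
      - key: path-filter
        name: path filter
        state: true
        include: ""
        exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"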

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/1.12.0/openebs-operator.yaml
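
After applying the operator YAML, one quick sanity check is to confirm that the OpenEBS control-plane pods are running and that the default storage classes were created:

    kubectl get pods -n openebs
    kubectl get storageclass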

Install using Helm stable charts

helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.12.0

(Note that the stable repository is being deprecated; prefer the openebs/charts repository described in the Breaking Change section above.)

For more details, refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 1.12 is supported only from 1.0 or higher and follows a process similar to earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components (a sketch is shown after this list).
  • Upgrade Jiva PVs to 1.12, either one at a time or multiple volumes together.
  • Upgrade cStor Pools to 1.12 and their associated volumes, either one at a time or multiple volumes together.
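
As a rough sketch, for clusters installed via kubectl, upgrading the control plane amounts to applying the 1.12.0 operator manifest over the existing installation (Helm users would run helm repo update followed by a chart upgrade instead):

    kubectl apply -f https://openebs.github.io/charts/1.12.0/openebs-operator.yaml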

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues setting up or upgrading, you can contact the community on the #openebs channel of the Kubernetes Slack workspace (https://slack.k8s.io).

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is to specify the list of block devices to be used in the StoragePoolClaim (SPC); a sample SPC is sketched after this list. The automatic selection of block devices has very limited support, and automatic provisioning of cStor pools with block devices of different capacities is not recommended.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If a block device is formatted with a filesystem, partitioned, or mounted, the cStor Pool will not be created on it. In the current release, there are manual steps for clearing the filesystem or using partitions to create cStor Pools; please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, then starting with 1.2, the cStor Pool will not be automatically re-created on the new devices after a node restart. This check exists to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, and involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in #2855.
  • A new version of the cStor schema is being worked on to address user feedback on ease of cStor provisioning, and to make Day 2 operations on cStor Pools easier to perform using GitOps. Existing StoragePoolClaim pools will continue to function as-is. Along with stabilizing the new schema, we have started working on migration features that will easily migrate clusters to the new schema in upcoming releases; once the proposed changes are complete, seamless migration from the older CRs to the new ones will be supported. To track the progress of the proposed changes, please refer to this design proposal. Note: We recommend trying out the new schema on greenfield clusters and providing feedback. Get started with these instructions.
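
For reference, a minimal SPC that pins a pool to explicit block devices might look like the sketch below. The pool name and blockdevice IDs here are hypothetical; list the devices NDM actually discovered with kubectl get blockdevices -n openebs and substitute those names.

    apiVersion: openebs.io/v1alpha1
    kind: StoragePoolClaim
    metadata:
      name: cstor-disk-pool              # hypothetical pool name
    spec:
      name: cstor-disk-pool
      type: disk
      poolSpec:
        poolType: striped                # stripe across the listed devices on each node
      blockDevices:
        blockDeviceList:
          # hypothetical blockdevice IDs as reported by NDM
          - blockdevice-936911c5c9b0218ed59e64009cc83c8f
          - blockdevice-77f834edba45b03318d9de5b79af0734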
