github openebs/openebs v2.4.0


Release Summary

OpenEBS v2.4 is our last monthly release of the year, with some key enhancements and several fixes for issues reported by the user community.

Note: Kubernetes 1.20 removes the SelfLink option, which is used by the OpenEBS Jiva and cStor (non-CSI) provisioners. Without it, PVCs remain in a Pending state. The workaround and fix are being tracked under this issue. A patch release will be made available as soon as the fix has been verified on 1.20 platforms.
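A commonly cited interim workaround for external provisioners on Kubernetes 1.20 (not OpenEBS-specific, drawn from general community guidance, and only usable while the RemoveSelfLink feature gate still exists) is to re-enable SelfLink on the kube-apiserver. A sketch of the static pod manifest change, assuming a kubeadm-style cluster:

    # /etc/kubernetes/manifests/kube-apiserver.yaml (path is an assumption; adjust for your platform)
    spec:
      containers:
        - command:
            - kube-apiserver
            - --feature-gates=RemoveSelfLink=false   # re-enables SelfLink while the gate still exists
            # ...all other existing flags remain unchanged

Follow the tracked issue above for the official fix.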

Here are some of the key highlights in this release:

New capabilities

  • ZFS Local PV has now graduated to stable, with all supported features and upgrade tests automated via e2e testing. ZFS Local PV is best suited for distributed workloads that require resilient local volumes that can sustain local disk failures. You can read more about using ZFS Local volumes at https://github.com/openebs/zfs-localpv and check out how ZFS Local PVs are used in production at Optoro.
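    For illustration only, a minimal StorageClass for ZFS Local PV might look like the following sketch (the pool name and parameter values are placeholders; refer to the zfs-localpv repository for the authoritative options):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-zfspv
    provisioner: zfs.csi.openebs.io
    parameters:
      poolname: "zfspv-pool"   # existing ZFS pool on the node (placeholder)
      fstype: "zfs"            # filesystem exposed to the workload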

  • OpenEBS is introducing a new NFS dynamic provisioner to allow the creation and deletion of NFS volumes using kernel NFS backed by block storage. This provisioner is being actively developed and is released as alpha. It allows users to provision OpenEBS RWX volumes where each volume gets its own NFS server instance. In previous releases, OpenEBS RWX volumes were supported via the Kubernetes NFS Ganesha and External Provisioner, where multiple RWX volumes share the same NFS Ganesha server. You can read more about the new OpenEBS Dynamic Provisioner at https://github.com/openebs/dynamic-nfs-provisioner.
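    As an illustration only (the StorageClass name openebs-rwx is a placeholder and must be created as described in the dynamic-nfs-provisioner documentation), an RWX volume is requested with a standard PersistentVolumeClaim:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: nfs-shared-pvc
    spec:
      storageClassName: openebs-rwx   # placeholder StorageClass backed by the NFS provisioner
      accessModes:
        - ReadWriteMany               # RWX: mountable by multiple pods
      resources:
        requests:
          storage: 5Gi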

Key Improvements

  • Added support for specifying a custom node affinity label for OpenEBS Local Hostpath volumes. By default, OpenEBS Local Hostpath volumes use kubernetes.io/hostname for setting the PV Node Affinity. Users can now specify a custom label to use for PV Node Affinity. The custom node affinity label can be specified in the Local PV storage class as follows:
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-hostpath
      annotations:
        openebs.io/cas-type: local
        cas.openebs.io/config: |
          - name: StorageType
            value: "hostpath"
          - name: NodeAffinityLabel
            value: "openebs.io/custom-node-id"
    provisioner: openebs.io/local
    volumeBindingMode: WaitForFirstConsumer
    reclaimPolicy: Delete
    
    This will help with use cases like:
    • Deployments where kubernetes.io/hostname is not unique across the cluster (Ref: #2875)
    • Deployments where an existing Kubernetes node running Local volumes is replaced with a new node, and the storage attached to the old node is moved to the new node. Without this feature, the Pods using the older node will remain in a Pending state.
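    For example, the custom label referenced above can be applied to a node with kubectl (the node name and label value are illustrative):

    kubectl label node worker-1 openebs.io/custom-node-id=node-1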
  • Added a configuration option to the Jiva volume provisioner to skip adding replica node affinity. This helps in deployments where replica nodes are frequently replaced with new nodes, causing the replicas to remain in a Pending state. Replica node affinity should be used when replica nodes are not replaced with new nodes, or when the new node comes back with the same node-affinity label. (Ref: #3226). Node affinity for Jiva volumes can be skipped by specifying the following ENV variable in the OpenEBS Provisioner Deployment:
         - name: OPENEBS_IO_JIVA_PATCH_NODE_AFFINITY
           value: "disabled"
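     Alternatively (illustrative, assuming the default provisioner deployment name openebs-provisioner in the openebs namespace), the variable can be set with kubectl:

     kubectl set env deployment/openebs-provisioner -n openebs OPENEBS_IO_JIVA_PATCH_NODE_AFFINITY=disabled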
    
  • Enhanced the OpenEBS Velero plugin (cStor) to automatically set the target IP once a cStor volume is restored from a backup. (Ref: openebs/velero-plugin#131). This feature can be enabled by setting the configuration option autoSetTargetIP in the VolumeSnapshotLocation as follows:
    apiVersion: velero.io/v1
    kind: VolumeSnapshotLocation
    metadata:
      ...
    spec:
      config:
        ...
        ...
        autoSetTargetIP: "true"
    
    (Huge thanks to @zlymeda for working on this feature which involved co-ordinating this fix across multiple repositories).
  • Enhanced the OpenEBS Velero plugin to automatically create the target namespace during restore if it doesn't exist. (Ref: openebs/velero-plugin#137).
  • Enhanced the OpenEBS helm chart to support image pull secrets. (Ref: openebs/charts#174).
  • Enhanced the OpenEBS helm chart to allow specifying resource limits on OpenEBS control plane pods. (Ref: openebs/charts#151).
  • Enhanced the NDM filters to support discovering LVM devices both with /dev/dm-X and /dev/mapper/x patterns. (Ref: #3310).

Key Bug Fixes

  • Fixed an issue that was causing the Jiva target to crash while fetching stats during ongoing IO. (Ref: openebs/jiva#334).
  • Fixed an issue where Jiva volumes failed to start if PodSecurityPolicies were set up. (Ref: #3305).
  • Fixed an issue where helm chart --dry-run would fail due to admission webhook checks. (Ref: openebs/maya#1771).
  • Fixed an issue where NDM would fail to discover block devices on encountering an issue with one block device, even if there were other good devices on the node. (Ref: #3051).
  • Fixed an issue where NDM failed to detect the OS disk on COS (Container-Optimized OS) nodes, where the root partition entry was set as root=/dev/dm-0. (Ref: openebs/node-disk-manager#516).

Backward Incompatibilities

  • Velero has updated the configuration for specifying a different node selector during restore. The configuration key changes from velero.io/change-pvc-node to velero.io/change-pvc-node-selector. (Ref: openebs/velero-plugin#139)

Other notable updates

  • OpenEBS ZFS Local PV CI has been updated to include CSI sanity tests, and some minor issues were fixed to conform with the CSI test suite. (Ref: openebs/zfs-localpv#232).
  • OpenEBS has applied to become a CNCF incubation project and is currently undergoing a Storage SIG review of the project, addressing the review comments provided.
  • Significant work is underway to make it easier to install only the components that users actually need for their workloads. These features will allow users to run different flavors of OpenEBS in Kubernetes clusters optimized for the workloads they intend to run. This can be achieved in the current version using a customized helm values file or a modified Kubernetes manifest. With the help of the community, we have continued to make significant progress towards individual helm charts for each of the storage engines. The locations of the various helm charts are as follows:
    • Dynamic Local PV ( host path and device)
    • Dynamic Local PV CSI ( ZFS )
    • Dynamic Local PV CSI ( Rawfile )
    • cStor
    • Mayastor
  • Automation of further Day 2 operations, such as automatically detecting node deletion from the cluster and re-balancing the volume replicas onto the next available node.
  • Keeping the OpenEBS-generated Kubernetes custom resources in sync with upstream Kubernetes versions, for example moving CRDs from v1beta1 to v1.

Show your Support

Thank you @FeynmanZhou (KubeSphere) for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @alexppg, @arne-rusek, @Atharex, @bobek, @Mosibi, @mpartel, @nareshdesh, @rahulkrishnanfs, @ssytnikov18, @survivant

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed.
  • Kubernetes 1.17+ is recommended for using cStor CSI drivers.
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
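    For example (illustrative; package and service names vary by distribution), on Ubuntu/Debian nodes the initiator can be installed with:

    sudo apt-get update && sudo apt-get install -y open-iscsi
    sudo systemctl enable --now iscsid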
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM ConfigMap to exclude those paths before installing OpenEBS (an illustrative ConfigMap sketch follows this list).
  • NDM runs as a privileged pod since it needs to access device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.
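As an illustrative sketch only (the key names and the exclude list should be verified against the openebs-operator.yaml shipped with this release), the NDM path filter referenced above is configured via a ConfigMap similar to:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: openebs-ndm-config      # assumed default name from the operator manifest
      namespace: openebs
    data:
      node-disk-manager.config: |
        filterconfigs:
          - key: path-filter
            name: path filter
            state: true
            include: ""
            exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram"   # illustrative exclude paths; adjust to your environment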

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.4.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.4.0
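
Note that --name is Helm 2 syntax. With Helm 3, the release name is passed positionally instead (a sketch; the --create-namespace flag requires Helm 3.2 or newer):

helm install openebs openebs/openebs --namespace openebs --version 2.4.0 --create-namespace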

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.4 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.4, either one at a time or multiple volumes.
  • Upgrade cStor Pools to 2.4 and their associated Volumes, either one at a time or multiple volumes.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues with setup or upgrade, you can reach out to the OpenEBS community on the Kubernetes Slack #openebs channel (https://slack.k8s.io).

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for provisioning cStor Pools is the new custom resource cStorPoolCluster (CSPC); a minimal illustrative CSPC manifest is sketched after this list. Provisioning cStor Pools using StoragePoolClaim (SPC) is still supported, but it will be deprecated in future releases. Pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If a block device is formatted with a filesystem or mounted, a cStor Pool will not be created on it. In the current release, there are manual steps to clear the filesystem or to use partitions for creating cStor Pools; please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2, the cStor Pool will not be automatically re-created on the new devices after a node restart. This check has been put in place to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, and involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in #2855.
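
As referenced in the first item above, a minimal CSPC manifest might be sketched as follows (the API version, node label value, and block device name are illustrative placeholders; consult the cStor operators documentation for the exact schema):

    apiVersion: cstor.openebs.io/v1
    kind: CStorPoolCluster
    metadata:
      name: cstor-pool-cluster       # placeholder name
      namespace: openebs
    spec:
      pools:
        - nodeSelector:
            kubernetes.io/hostname: "worker-1"           # placeholder node
          dataRaidGroups:
            - blockDevices:
                - blockDeviceName: "blockdevice-example" # placeholder NDM block device
          poolConfig:
            dataRaidGroupType: "stripe"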
