github spdk/spdk v16.06
v16.06: NVMf userspace target


This release adds a userspace NVMf (NVMe over Fabrics) target, conforming to the
newly released NVMe over Fabrics 1.0 and NVMe 1.2.1 specifications. The NVMf target exports NVMe
devices from a host machine over the network via RDMA. Currently, the target is
limited to directly exporting physical NVMe devices, and the discovery subsystem
is not supported.

This release also includes a general API cleanup, including renaming all declarations
in public headers to carry an spdk prefix to prevent namespace clashes with
user code.

  • NVMe
    • The nvme_attach() API was reworked into a new probe/attach model, which
      moves device detection into the NVMe library. The new API also allows
      parallel initialization of NVMe controllers, providing a major reduction in
      startup time when using multiple controllers.
    • I/O queue allocation was changed to be explicit in the API. Each function
      that generates I/O requests now takes a queue pair (spdk_nvme_qpair *)
      argument, and I/O queues may be allocated using
      spdk_nvme_ctrlr_alloc_io_qpair(). This allows more flexible assignment of
      queue pairs than the previous model, which only allowed a single queue
      per thread and limited the total number of I/O queues to the lowest number
      supported on any attached controller.
    • Added support for the Write Zeroes command.
    • examples/nvme/perf can now report I/O command latency from the
      controller's viewpoint using the Intel vendor-specific read/write latency
      log page.
    • Added namespace reservation command support, which can be used to coordinate
      sharing of a namespace between multiple hosts.
    • Added hardware SGL support, which enables use of scattered buffers that
      don't conform to the PRP list alignment and length requirements on supported
      NVMe controllers.
    • Added end-to-end data protection support, including the ability to write and
      read metadata in extended LBA (metadata appended to each block of data in the
      buffer) and separate metadata buffer modes.
      See spdk_nvme_ns_cmd_write_with_md() and spdk_nvme_ns_cmd_read_with_md()
      for details.
  • IOAT
    • The DMA block fill feature is now exposed via the ioat_submit_fill()
      function. This is functionally similar to memset(), except the memory is
      filled with an 8-byte repeating pattern instead of a single byte like memset.
  • PCI
    • Added support for using DPDK for PCI device mapping in addition to the
      existing libpciaccess option. Using the DPDK PCI support also allows use of
      the Linux VFIO driver model, which means that SPDK userspace drivers will work
      with the IOMMU enabled. Additionally, SPDK applications may be run as an
      unprivileged user with access restricted to a specific set of PCIe devices.
    • The PCI library API was made more generic to abstract away differences
      between the underlying PCI access implementations.
