Delta Lake 2.4.0

We are excited to announce the release of Delta Lake 2.4.0 on Apache Spark 3.4. Similar to Apache Spark™, we have released Maven artifacts for both Scala 2.12 and Scala 2.13.
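
For reference, a minimal sketch of depending on this release from an sbt build (the coordinates follow the usual io.delta naming for the 2.x line, with delta-core as the Spark artifact; versions shown are illustrative):

```scala
// build.sbt (sketch): Delta Lake 2.4.0 against Spark 3.4 with Scala 2.12.
scalaVersion := "2.12.17"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"  % "3.4.0" % "provided",
  "io.delta"         %% "delta-core" % "2.4.0"
)
```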

The key features in this release are as follows:

  • Support for Apache Spark 3.4.
  • Support writing Deletion Vectors for the DELETE command. Previously, when deleting rows from a Delta table, any file with at least one matching row would be rewritten. With Deletion Vectors, these expensive rewrites can be avoided (see the sketch after this list). See What are deletion vectors? for more details.
  • Support for all write operations on tables with Deletion Vectors enabled.
  • Support PURGE to remove Deletion Vectors from the current version of a Delta table by rewriting any data files with deletion vectors (sketched after this list). See the documentation for more details.
  • Support reading Change Data Feed for tables with Deletion Vectors enabled.
  • Support REPLACE WHERE expressions in SQL to selectively overwrite data (sketched after this list). Previously, “replaceWhere” options were only supported in the DataFrameWriter APIs.
  • Support WHEN NOT MATCHED BY SOURCE clauses in SQL for the MERGE command (sketched after this list).
  • Support omitting generated columns from the column list of SQL INSERT INTO queries. Delta will automatically generate the values for any unspecified generated columns (see the sketch after this list).
  • Support the TimestampNTZ data type added in Spark 3.3. Using TimestampNTZ requires a Delta protocol upgrade; see the documentation for more information and the sketch after this list.
  • Other notable changes
    • Increased resiliency for S3 multi-cluster reads and writes.
      • Use a per-JVM lock to minimize the number of concurrent recovery attempts. Concurrent recoveries may cause concurrent readers to see a RemoteFileChangedException.
      • Catch any RemoteFileChangedException in the reader and retry reading.
    • Allow changing the column type of a char or varchar column to a compatible type with the ALTER TABLE command (sketched after this list). The new behavior matches Apache Spark and allows upcasting from char or varchar to varchar or string.
    • Block using overwriteSchema with dynamic partition overwrite. This combination could corrupt the table, since not all of the data may be removed and the schema of the newly written partitions may not match the schema of the unchanged partitions.
    • Return an empty DataFrame for Change Data Feed reads when there are no commits within the provided timestamp range (sketched after this list). Previously, an error would be thrown.
    • Fix a bug in Change Data Feed reads for records created during the ambiguous hour of a daylight saving time transition.
    • Fix a bug where querying an external Delta table at the root of an S3 bucket would throw an error.
    • Remove leaked internal Spark metadata from the Delta log to make any affected tables readable again.
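
The short sketches below illustrate several of the features above. They are illustrative rather than authoritative (table names, column names, and data are invented), and they assume a Spark 3.4 session configured for Delta roughly as follows:

```scala
// Shared setup for the sketches below: a local Spark 3.4 session with the
// Delta SQL extension and catalog enabled (standard configuration keys from
// the Delta Lake docs).
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("delta-2.4-sketches")
  .master("local[*]")
  .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
  .config("spark.sql.catalog.spark_catalog",
    "org.apache.spark.sql.delta.catalog.DeltaCatalog")
  .getOrCreate()
```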
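
Deletion Vectors with DELETE: a sketch of enabling the table property and then deleting rows; the table name and predicate are hypothetical.

```scala
// Sketch: enable Deletion Vectors on a table, then DELETE matching rows.
// With the property enabled, matching rows are recorded in a deletion vector
// instead of rewriting every file that contains a match.
spark.sql("CREATE TABLE events (id BIGINT, category STRING) USING delta")

spark.sql("""
  ALTER TABLE events
  SET TBLPROPERTIES ('delta.enableDeletionVectors' = 'true')
""")

spark.sql("DELETE FROM events WHERE category = 'obsolete'")
```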
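
PURGE: a sketch of removing deletion vectors from the current table version via REORG TABLE, reusing the events table from the previous sketch.

```scala
// Sketch: rewrite data files that carry deletion vectors so the current
// version of the table no longer contains any deletion vectors.
spark.sql("REORG TABLE events APPLY (PURGE)")
```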
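
REPLACE WHERE in SQL: a sketch of selectively overwriting one slice of the table, assuming a hypothetical staged_events source table.

```scala
// Sketch: overwrite only the rows matching the predicate; rows outside the
// predicate are left untouched. Previously this was only available through
// the DataFrameWriter "replaceWhere" option.
spark.sql("""
  INSERT INTO events
  REPLACE WHERE category = 'web'
  SELECT id, category FROM staged_events WHERE category = 'web'
""")
```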
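
WHEN NOT MATCHED BY SOURCE: a sketch of a MERGE that updates matched rows, inserts new rows, and deletes target rows that no longer exist in the (hypothetical) source.

```scala
// Sketch: MERGE with a WHEN NOT MATCHED BY SOURCE clause, which acts on
// target rows that have no match in the source (here they are deleted).
spark.sql("""
  MERGE INTO events AS t
  USING staged_events AS s
    ON t.id = s.id
  WHEN MATCHED THEN UPDATE SET t.category = s.category
  WHEN NOT MATCHED THEN INSERT (id, category) VALUES (s.id, s.category)
  WHEN NOT MATCHED BY SOURCE THEN DELETE
""")
```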
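
Generated columns in INSERT INTO: a sketch that creates a table with a generated column through the DeltaTable builder API and then inserts a row without listing that column; all names are hypothetical.

```scala
import io.delta.tables.DeltaTable
import org.apache.spark.sql.types.{DateType, StringType, TimestampType}

// Sketch: 'eventDate' is generated from 'eventTime'.
DeltaTable.create(spark)
  .tableName("page_views")
  .addColumn("eventTime", TimestampType)
  .addColumn("page", StringType)
  .addColumn(
    DeltaTable.columnBuilder("eventDate")
      .dataType(DateType)
      .generatedAlwaysAs("CAST(eventTime AS DATE)")
      .build())
  .execute()

// The generated column is omitted from the column list; Delta fills it in
// from the generation expression.
spark.sql("""
  INSERT INTO page_views (eventTime, page)
  VALUES (TIMESTAMP'2023-05-01 10:15:00', '/home')
""")
```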
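
TimestampNTZ: a sketch of creating a table with a TIMESTAMP_NTZ column and of upgrading an existing table before adding such a column. The feature name below follows the documented delta.feature.&lt;name&gt; pattern and is an assumption to check against the docs.

```scala
// Sketch: a new table with a TIMESTAMP_NTZ column (type added in Spark 3.3).
spark.sql("CREATE TABLE ntz_demo (id BIGINT, ts TIMESTAMP_NTZ) USING delta")

// For an existing table, enable the corresponding table feature (protocol
// upgrade) before adding a TIMESTAMP_NTZ column. Feature name is assumed.
spark.sql("""
  ALTER TABLE events
  SET TBLPROPERTIES ('delta.feature.timestampNtz' = 'supported')
""")
spark.sql("ALTER TABLE events ADD COLUMNS (created_ntz TIMESTAMP_NTZ)")
```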
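
char/varchar type changes: a sketch of widening a VARCHAR column to STRING with ALTER TABLE, matching the Spark upcasting behavior described above.

```scala
// Sketch: VARCHAR(20) can be widened to STRING (an upcast); narrowing is
// not allowed.
spark.sql("CREATE TABLE users (id BIGINT, name VARCHAR(20)) USING delta")
spark.sql("ALTER TABLE users ALTER COLUMN name TYPE STRING")
```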
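
Change Data Feed reads by timestamp: a sketch of the DataFrame reader options; with this release a timestamp range that contains no commits returns an empty DataFrame instead of raising an error. It assumes CDF was enabled on the table.

```scala
// Sketch: read the Change Data Feed for a timestamp range. Assumes the table
// was created with 'delta.enableChangeDataFeed' = 'true'.
val changes = spark.read.format("delta")
  .option("readChangeFeed", "true")
  .option("startingTimestamp", "2023-05-01 00:00:00")
  .option("endingTimestamp", "2023-05-02 00:00:00")
  .table("events")

changes.show()
```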

Note: the Delta Lake 2.4.0 release does not include the Iceberg to Delta converter because iceberg-spark-runtime does not support Spark 3.4 yet. The Iceberg to Delta converter is still supported when using Delta 2.3 with Spark 3.3.

Credits

Alkis Evlogimenos, Allison Portis, Andreas Chatzistergiou, Anton Okolnychyi, Bart Samwel, Bo Gao, Carl Fu, Chaoqin Li, Christos Stavrakakis, David Lewis, Desmond Cheong, Dhruv Shah, Eric Maynard, Fred Liu, Fredrik Klauss, Haejoon Lee, Hussein Nagree, Jackie Zhang, Jintian Liang, Johan Lasperas, Lars Kroll, Lukas Rupprecht, Matthew Powers, Ming DAI, Ming Dai, Naga Raju Bhanoori, Paddy Xu, Prakhar Jain, Rahul Shivu Mahadev, Rui Wang, Ryan Johnson, Sabir Akhadov, Satya Valluri, Scott Sandre, Shixiong Zhu, Tom van Bussel, Venki Korukanti, Vitalii Li, Wenchen Fan, Xi Liang, Yaohua Zhao, Yuming Wang
