New Functionalities
- Infer mixed Parquet schemas in wr.s3.read_parquet_metadata and wr.s3.store_parquet_metadata #195
- Support adding new columns with wr.s3.to_parquet and wr.s3.store_parquet_metadata [TUTORIAL] #232
Enhancements
- wr.s3.delete_objects now raises an exception for objects that could not be deleted #237
- More user-friendly exceptions from wr.athena.read_sql_query and wr.athena.read_sql_table #239
Bug Fixes
- Fix issue using wr.s3.store_parquet_metadata on non-partitioned datasets #231
- Fix bug in wr.s3.read_json when using chunksize #235
- Bump s3fs version #236
- Fix wr.s3.to_parquet not sanitizing column names when writing a single file #240
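For context on the chunksize fix: wr.s3.read_json builds on pandas' JSON reader, where passing chunksize turns the call into an iterator of DataFrames rather than a single frame. A minimal pandas-only sketch of that behavior (reading from an in-memory buffer instead of S3):

```python
import io

import pandas as pd

# JSON Lines payload, standing in for an object that
# wr.s3.read_json would fetch from S3.
payload = '{"a": 1}\n{"a": 2}\n{"a": 3}\n'

# With chunksize set, read_json yields DataFrames of at most
# `chunksize` rows instead of returning one DataFrame.
chunks = list(pd.read_json(io.StringIO(payload), lines=True, chunksize=2))
print([len(chunk) for chunk in chunks])  # chunk row counts: [2, 1]
```

The same iteration pattern applies when consuming chunked results from wr.s3.read_json: process each DataFrame as it arrives to keep memory bounded.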
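On the column-name fix: Athena/Glue-compatible column names are generally lowercase with underscores, which is why sanitization matters even for single-file writes. A hypothetical sketch of such a sanitizer (illustrative only; not awswrangler's actual implementation, whose exact rules are not shown here):

```python
import re


def sanitize_column_name(name: str) -> str:
    """Hypothetical sanitizer: lowercase the name and replace runs of
    characters outside [a-z0-9_] with a single underscore."""
    return re.sub(r"[^a-z0-9_]+", "_", name.strip().lower())


print(sanitize_column_name("My Column!"))  # my_column_
```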
Thanks
We thank the following contributors/users for their work on this release:
@mrshu, @bryanyang0528, @JPFrancoia, @jaidisido, @qemtek, @dwbelliston, @mbiemann, @parasml, @BrainMonkey, @hyperloglog, @igorborgest.
P.S. The Lambda Layer zip file and the Glue wheel/egg files are available below. Just upload them and run!
P.P.S. AWS Data Wrangler relies on compiled dependencies (C/C++), so Glue PySpark is not supported for now (only Glue Python Shell).