dbt-spark 1.3.0-rc1 - September 28, 2022
Features
- Support merge_exclude_columns for Spark models (#5260, #390)
- Array macros (#453, #454)
- Implement testing for type_boolean in Spark (#470, #471)
- Support job clusters in the notebook submission method; remove the user requirement for Python model submission (#444, #467)
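For reference, the new merge_exclude_columns setting (first entry above) is configured like any other incremental config. A minimal sketch, assuming a model with a unique_key of id and an audit column that should never be overwritten on merge; the model and column names are made up for illustration:

```sql
-- Hypothetical incremental model showing merge_exclude_columns (dbt 1.3);
-- model and column names are illustrative only.
{{
    config(
        materialized='incremental',
        incremental_strategy='merge',
        unique_key='id',
        merge_exclude_columns=['created_at']  -- excluded from the merge update
    )
}}

select id, status, created_at
from {{ ref('stg_orders') }}
```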
Fixes
- Python incremental models now create their tmp table in the correct schema (#441, #445)
- Change get_columns_in_relation to fix cache inconsistencies that caused incremental models to fail on on_schema_change (#447, #451)
Under the Hood
- Submit Python models with the Command API by default; adjust the run name (#424, #442)
- Better interface for Python submission (#452, #452)
- Ignore mypy typing issues (#461, #462)
- Enable pandas and pandas-on-Spark DataFrames for dbt Python models (#468, #469)
- Convert the DataFrame to a PySpark DataFrame if it is a koalas DataFrame before writing (#473, #474)
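The last two entries above concern how a Python model's return value is handled before writing. A minimal sketch of that kind of guard, not dbt-spark's actual code (the helper name and structure are hypothetical): a pandas-on-Spark (koalas) DataFrame exposes to_spark(), and a plain pandas DataFrame can be handed to the active Spark session.

```python
# Illustrative sketch of converting a python model's result to a native
# Spark DataFrame before writing; this helper is hypothetical, not
# dbt-spark's actual implementation.

def ensure_spark_dataframe(df, session):
    """Return a native Spark DataFrame for pandas / pandas-on-Spark input."""
    module = type(df).__module__
    if module.startswith(("pyspark.pandas", "databricks.koalas")):
        # pandas-on-Spark (formerly koalas) DataFrames expose .to_spark()
        return df.to_spark()
    if module.startswith("pandas"):
        # plain pandas: let the active Spark session build the DataFrame
        return session.createDataFrame(df)
    return df  # assumed to already be a Spark DataFrame
```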
Dependencies
- Bump pyodbc from 4.0.32 to 4.0.34 (#417, #459)
- Bump black from 22.3.0 to 22.8.0 (#417, #458)
- Update click requirement from ~=8.0.4 to ~=8.1.3 (#417, #457)
- Bump mypy from 0.950 to 0.971 (#417, #456)
- Bump thrift-sasl from 0.4.1 to 0.4.3 (#417, #455)
Contributors
- @chamini2 (#469)
- @colin-rogers-dbt (#462)
- @dave-connors-3 (#390)
- @dbeatty10 (#454, #469, #474)
- @graciegoheen (#454)
- @jpmmcneill (#471)
- @ueshin (#474)