## 1.36.0 (2025-08-05)

### Snowpark Python API Updates

#### New Features

- `Session.create_dataframe` now accepts keyword arguments that are forwarded to the internal call to `Session.write_pandas` or `Session.write_arrow` when creating a DataFrame from a pandas DataFrame or a pyarrow Table (first sketch after this list).
- Added new APIs for `AsyncJob` (second sketch below):
  - `AsyncJob.is_failed()` returns a `bool` indicating whether a job has failed. It can be used in combination with `AsyncJob.is_done()` to determine whether a job is finished and errored.
  - `AsyncJob.status()` returns a string representing the current query status (e.g., "RUNNING", "SUCCESS", "FAILED_WITH_ERROR") for detailed monitoring without calling `result()`.
- Added a DataFrame profiler (third sketch below). To use it, call `get_execution_profile()` on the desired DataFrame. The profiler reports the queries executed to evaluate the DataFrame and statistics about each query operator. This is currently an experimental feature.
- Added support for the following functions in `functions.py` (fourth sketch below):
  - `ai_sentiment`
- Updated the interface for the experimental feature `context.configure_development_features` (fifth sketch below). All development features are disabled by default unless explicitly enabled by the user.
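
A minimal sketch of the keyword forwarding, assuming `chunk_size` and `use_logical_type` (existing `Session.write_pandas` parameters) are among the arguments that can be forwarded; the connection parameters are placeholders:

```python
import pandas as pd
from snowflake.snowpark import Session

# Placeholder connection parameters; replace with real credentials.
session = Session.builder.configs(
    {"account": "...", "user": "...", "password": "..."}
).create()

pdf = pd.DataFrame({"id": [1, 2, 3], "label": ["a", "b", "c"]})

# Keyword arguments not consumed by create_dataframe itself are forwarded
# to the internal Session.write_pandas call (Session.write_arrow when a
# pyarrow.Table is passed). That these two specific parameters are
# forwarded is an assumption of this sketch.
df = session.create_dataframe(pdf, chunk_size=1000, use_logical_type=True)
df.show()
```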
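
A sketch of the new `AsyncJob` APIs, reusing the `session` from the previous sketch; the polling loop and generator query are illustrative only:

```python
import time

# Run a query asynchronously; collect_nowait() returns an AsyncJob.
job = session.sql(
    "select seq4() as n from table(generator(rowcount => 10))"
).collect_nowait()

while not job.is_done():
    print(job.status())        # e.g. "RUNNING"
    time.sleep(1)

if job.is_failed():            # the job finished and errored
    print(f"Query {job.query_id} failed")
else:
    rows = job.result()        # safe: the job completed successfully
```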
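
A sketch of the experimental profiler. `my_table` is a hypothetical table, and two details are assumptions here: that the DataFrame is evaluated before profiling, and that the profile is returned as a printable report (whether the profiler must first be enabled through the development-feature interface, see the fifth sketch, is not specified in this entry):

```python
# Build and evaluate a DataFrame, then inspect how it was executed.
df = session.table("my_table").group_by("c1").count()
df.collect()

# Assumed usage: the profile describes the queries issued above and
# statistics about each query operator.
profile = df.get_execution_profile()
print(profile)
```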
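
A sketch of the new `ai_sentiment` function; the single-argument form is assumed, and additional parameters (e.g. a list of sentiment categories) may exist but are not shown:

```python
from snowflake.snowpark.functions import ai_sentiment, col

reviews = session.create_dataframe(
    [["The support team was fantastic"], ["Shipping was slow"]],
    schema=["review"],
)

# Score each review; the shape of the result is determined by the server.
reviews.select(ai_sentiment(col("review")).alias("sentiment")).show()
```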
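
A sketch of the opt-in development-feature interface; `enable_dataframe_profiling` is a hypothetical flag name used only for illustration, so consult the documentation for the actual keyword arguments:

```python
from snowflake.snowpark.context import configure_development_features

# All development features are off by default; each must be enabled
# explicitly. The flag name below is a hypothetical placeholder.
configure_development_features(enable_dataframe_profiling=True)
```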

### Snowpark pandas API Updates

#### Improvements

- Hybrid execution row estimate improvements and a reduction of eager calls.
- Added a new configuration variable to control transfer costs out of Snowflake when using hybrid execution.
- Added support for creating permanent and immutable UDFs/UDTFs with `DataFrame/Series/GroupBy.apply`, `map`, and `transform` by passing the `snowflake_udf_params` keyword argument (sketch after this list). See the documentation for details.
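
A sketch of persisting the UDF behind an `apply` call. The dictionary keys (`name`, `stage_location`, `immutable`) are assumed option names for illustration, and an active Snowpark session is presumed:

```python
import modin.pandas as pd
import snowflake.snowpark.modin.plugin  # noqa: F401  (registers the Snowflake backend)

df = pd.DataFrame({"a": [1.0, 2.0, 3.0]})

# Persist the UDF created for this apply() as a permanent, immutable object.
result = df["a"].apply(
    lambda x: x * 2,
    snowflake_udf_params={
        "name": "MY_DOUBLE_UDF",        # hypothetical permanent UDF name
        "stage_location": "@my_stage",  # hypothetical stage for the UDF
        "immutable": True,              # assumed flag for immutability
    },
)
print(result)
```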

#### Bug Fixes

- Fixed an issue where the Snowpark pandas plugin would unconditionally disable `AutoSwitchBackend` even when users had explicitly configured it via environment variables or programmatically (sketch below).
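
A sketch of the configuration that is now respected, assuming `AutoSwitchBackend` is the standard parameter in `modin.config`; the environment-variable name in the comment is also an assumption:

```python
import modin.config as cfg

# Programmatic opt-in to automatic backend switching; the equivalent
# environment variable is assumed to be MODIN_AUTO_SWITCH_BACKENDS.
cfg.AutoSwitchBackend.put(True)
print(cfg.AutoSwitchBackend.get())  # the plugin no longer overrides this
```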