snowflakedb/snowpark-python Release v1.34.0


1.34.0 (2025-07-15)

Snowpark Python API Updates

New Features

  • Added a new option TRY_CAST to DataFrameReader. When TRY_CAST is True, columns are wrapped in a TRY_CAST statement rather than a hard cast when loading data (see the sketch after this list).
  • Added a new option USE_RELAXED_TYPES to the INFER_SCHEMA_OPTIONS of DataFrameReader. When set to True, this option casts all strings to max-length strings and all numeric types to DoubleType.
  • Added debuggability improvements to eagerly validate dataframe schema metadata. Enable it using snowflake.snowpark.context.configure_development_features().
  • Added a new function snowflake.snowpark.dataframe.map_in_pandas that allows users to map a function across a dataframe. The mapping function takes an iterator of pandas DataFrames as input and provides an iterator of pandas DataFrames as output.
  • Added a TTL cache for describe queries. Repeated queries within a 15-second interval use the cached value rather than re-querying Snowflake.
  • Added a parameter fetch_with_process to DataFrameReader.dbapi (PrPr) to enable multiprocessing for parallel data fetching in local ingestion. By default, local ingestion uses multithreading; multiprocessing may improve performance for CPU-bound tasks such as Parquet file generation.
  • Added a new function snowflake.snowpark.functions.model that allows users to call methods of a model.
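
As a rough illustration of the new reader options and the development-feature toggle, a minimal sketch follows. The stage path, the dict form of INFER_SCHEMA_OPTIONS, and the session setup are illustrative assumptions, not taken from these notes.

```python
from snowflake.snowpark import Session
from snowflake.snowpark.context import configure_development_features

# Assumes a default Snowflake connection is configured; otherwise pass
# connection parameters via Session.builder.configs(...).
session = Session.builder.getOrCreate()

# Opt in to the new debuggability features (eager schema-metadata validation).
configure_development_features()

# TRY_CAST=True wraps column casts in TRY_CAST, so values that fail to cast
# load as NULL instead of failing the load.
# USE_RELAXED_TYPES (under INFER_SCHEMA_OPTIONS) relaxes inferred types to
# max-length strings and DoubleType.
# The stage path and the dict-valued option below are assumptions for illustration.
df = (
    session.read
    .option("INFER_SCHEMA", True)
    .option("TRY_CAST", True)
    .option("INFER_SCHEMA_OPTIONS", {"USE_RELAXED_TYPES": True})
    .csv("@my_stage/data.csv")
)
df.show()
```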

Improvements

  • Added support for row validation against an XSD schema via the rowValidationXSDPath option when reading XML files with a row tag using the rowTag option (see the sketch after this list).
  • Improved SQL generation for session.table().sample() to generate a flat SQL statement.
  • Added support for complex column expressions as input for functions.explode.
  • Added debuggability improvements to show which Python lines an SQL compilation error corresponds to. Enable it using snowflake.snowpark.context.configure_development_features(). This feature also requires AST collection to be enabled in the session, which can be done with session.ast_enabled = True.
  • to_snowpark_pandas() now sets enforce_ordering=True when called on a Snowpark DataFrame containing DML/DDL queries, instead of throwing a NotImplementedError.
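
A sketch of the XML row-validation improvement above; the element name and stage paths are placeholders, while the option keys come from these notes.

```python
# Assumes `session` is an existing Snowpark Session (see the sketch above).
# rowTag selects the repeated XML element that becomes one output row;
# rowValidationXSDPath points at an XSD document used to validate each row.
df = (
    session.read
    .option("rowTag", "book")
    .option("rowValidationXSDPath", "@my_stage/books.xsd")
    .xml("@my_stage/books.xml")
)
df.show()
```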

Bug Fixes

  • Fixed a bug caused by redundant validation when creating an Iceberg table.
  • Fixed a bug in DataFrameReader.dbapi (PrPr) where closing the cursor or connection could unexpectedly raise an error and terminate the program.
  • Fixed ambiguous column errors when using table functions in DataFrame.select() that have output columns matching the input DataFrame's columns. This improvement works when dataframe columns are provided as Column objects.
  • Fixed a bug where a NULL value in a column of DecimalType would cause the column to be cast to FloatType, leading to precision loss.

Snowpark Local Testing Updates

Bug Fixes

  • Fixed a bug in processing window functions that led to incorrect indexing in results.
  • When a numeric scalar is passed to fillna, non-numeric columns are now ignored instead of producing an error.

Snowpark pandas API Updates

New Features

  • Added support for DataFrame.to_excel and Series.to_excel (see the sketch after this list).
  • Added support for pd.read_feather, pd.read_orc, and pd.read_stata.
  • Added support for pd.explain_switch() to return debugging information on hybrid execution decisions.
  • Added support for pd.read_snowflake when the global modin backend is Pandas.
  • Added support for pd.to_dynamic_table, pd.to_iceberg, and pd.to_view.
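
A minimal sketch of the newly supported I/O methods; it assumes a default Snowflake connection is configured for the Snowpark pandas plugin, and the file names are placeholders.

```python
import modin.pandas as pd
import snowflake.snowpark.modin.plugin  # registers the Snowpark pandas backend

# DataFrame.to_excel and Series.to_excel are now supported.
df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})
df.to_excel("frame.xlsx")
df["a"].to_excel("series.xlsx")

# pd.read_feather, pd.read_orc, and pd.read_stata are now supported;
# the file name here is a placeholder.
# feather_df = pd.read_feather("data.feather")

# pd.explain_switch() returns debugging information on hybrid execution
# decisions, as described above.
# print(pd.explain_switch())
```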

Improvements

  • Added modin telemetry on API calls and hybrid engine switches.
  • Show more helpful error messages to Snowflake Notebook users when the modin or pandas version does not match our requirements.
  • Added a data type guard to the cost functions for hybrid execution mode (PrPr) which checks for data type compatibility.
  • Added automatic switching to the pandas backend in hybrid execution mode (PrPr) for many methods that are not directly implemented in Snowpark pandas.
  • Set the 'type' and other standard fields for Snowpark pandas telemetry.

Dependency Updates

  • Added tqdm and ipywidgets as dependencies so that progress bars appear when switching between modin backends.
  • Updated the supported modin versions to >=0.33.0 and <0.35.0 (previously >=0.32.0 and <0.34.0).

Bug Fixes

  • Fixed a bug in hybrid execution mode (PrPr) where certain Series operations would raise TypeError: 'numpy.ndarray' object is not callable.
  • Fixed a bug in hybrid execution mode (PrPr) where calling numpy operations like np.where on modin objects with the Pandas backend would raise an AttributeError. This fix requires modin version 0.34.0 or newer.
  • Fixed an issue in df.melt where the resulting values had an additional suffix applied.
