1.5.0 (2023-03-20)
Features Added
- Added support for `tags` on Compute Resources (see the compute sketch after this list).
- Added support for promoting a data asset from a workspace to a registry.
- Added support for registering a named asset from a job output or node output by specifying `name` and `version` settings (see the output sketch after this list).
- Added support for data binding on outputs inside dynamic arguments for DSL pipelines.
- Added support for serverless compute in pipeline, command, AutoML, and sweep jobs.
- Added support for `job_tier` and `priority` in standalone jobs.
- Added support for passing `locations` via the command function; the value is set on `JobResourceConfiguration.locations`.
- Added support for modifying SSH key values after creation on Compute Resources.
- Added WorkspaceConnection types `s3`, `snowflake`, `azure_sql_db`, `azure_synapse_analytics`, `azure_my_sql_db`, and `azure_postgres_db`.
- Added WorkspaceConnection auth type `access_key` for `s3` (see the connection sketch after this list).
- Added the `DataImport` class and `DataOperations.import_data` (see the import sketch after this list).
- Added `DataOperations.list_materialization_status` to list the status of data import jobs that create asset versions, keyed by asset name.
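
A minimal sketch of the new `tags` support on Compute Resources; the workspace details, cluster size, and tag values below are placeholders rather than documented samples:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AmlCompute

# Placeholder workspace details; replace with your own.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Tags can now be attached to a compute resource at creation time.
cluster = AmlCompute(
    name="cpu-cluster",
    size="STANDARD_DS3_v2",
    min_instances=0,
    max_instances=2,
    tags={"team": "data-science", "costcenter": "1234"},
)
ml_client.compute.begin_create_or_update(cluster)
```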
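
A sketch of registering a job output as a named, versioned data asset by setting `name` and `version` on `Output`; the training script, environment, and asset name are illustrative assumptions:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient, command, Output

ml_client = MLClient.from_config(credential=DefaultAzureCredential())

job = command(
    code="./src",  # placeholder folder containing train.py
    command="python train.py --output_dir ${{outputs.model_dir}}",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
    compute="cpu-cluster",
    outputs={
        # name/version register this output as a data asset once the job completes
        "model_dir": Output(type="uri_folder", name="trained-model", version="1"),
    },
)
ml_client.jobs.create_or_update(job)
```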
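
For the new `s3` connection type with `access_key` auth, a sketch along these lines; the `AccessKeyConfiguration` shape and the target URL are assumptions based on later documentation:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import WorkspaceConnection, AccessKeyConfiguration

ml_client = MLClient.from_config(credential=DefaultAzureCredential())

# An s3 connection authenticated with an access key (all values are placeholders).
s3_connection = WorkspaceConnection(
    name="my-s3-connection",
    type="s3",
    target="https://my-bucket.s3.amazonaws.com",
    credentials=AccessKeyConfiguration(
        access_key_id="<access-key-id>",
        secret_access_key="<secret-access-key>",
    ),
)
ml_client.connections.create_or_update(workspace_connection=s3_connection)
```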
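
And a sketch of `DataImport` with `DataOperations.import_data` and `DataOperations.list_materialization_status`; the `FileSystem` source and its module path are assumptions drawn from later docs and may differ in this release:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import DataImport
from azure.ai.ml.data_transfer import FileSystem  # assumed location of the FileSystem source type

ml_client = MLClient.from_config(credential=DefaultAzureCredential())

# Import files from the s3 connection above into a workspace datastore,
# creating a new version of the "imported-s3-data" asset.
data_import = DataImport(
    name="imported-s3-data",
    source=FileSystem(connection="my-s3-connection", path="raw/*"),
    path="azureml://datastores/workspaceblobstore/paths/imported-s3-data/",
)
ml_client.data.import_data(data_import=data_import)

# List the status of the import jobs that materialized versions of this asset.
for job in ml_client.data.list_materialization_status("imported-s3-data"):
    print(job.status)
```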
Bugs Fixed
- Fix experiment name being wrongly set to 'Default' when scheduling an existing job.
- Improved the error message when a local path fails to match the data asset type.
- Improved the error message when an asset does not exist in a registry.
- Fix an issue when submitting a Spark pipeline job that refers to a registered component.
- Fix an issue that prevented `Job.download` from downloading the output of a `BatchJob`.
Other Changes
- Added dependency on `azure-mgmt-resource`.
- Added dependency on `azure-mgmt-resourcegraph`.
- Added dependency on `opencensus-ext-azure<2.0.0`.
- Updated job types to use MFE Dec preview REST objects.
- Added classifiers for Python version 3.11.
- Added warning for reserved keywords in IO names in pipeline job nodes.
- Added telemetry logging for SDK Jupyter Notebook scenarios, with an opt-out option (see README.md).