pypi gluonts 0.11.0


Overview

Incremental training

Estimators are now re-trainable on new data using the train_from method, which accepts a previously trained model (predictor) and new data to train on; this can greatly reduce training time when combined with early stopping. The feature is integrated with gluonts.shell-based SageMaker containers, and can be used by specifying the additional model channel to point to the output of a previous training job. More info in #2249.
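
The following is a minimal sketch of the workflow, using DeepAREstimator and the m4_hourly dataset purely as placeholders; the exact train_from signature follows the description above and should be double-checked against #2249.

```python
from gluonts.dataset.repository.datasets import get_dataset
from gluonts.mx import DeepAREstimator, Trainer

dataset = get_dataset("m4_hourly")

estimator = DeepAREstimator(
    freq=dataset.metadata.freq,
    prediction_length=dataset.metadata.prediction_length,
    trainer=Trainer(epochs=5),
)

# Initial training run.
predictor = estimator.train(dataset.train)

# Later: resume training from the existing predictor on new data
# (dataset.train is reused here as a stand-in for the new data).
updated_predictor = estimator.train_from(predictor, dataset.train)
```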

New models

Two models are added in this release:

  • DeepVARHierarchicalEstimator, a hierarchical extension to DeepVAREstimator; learn more about how to use this in this tutorial.
  • DeepNPTSEstimator, a global extension to NPTS, where sampling probabilities are learned from data; learn more about how to use this estimator here.

Deprecated import paths and options

This release moves MXNet-based models from gluonts.model to gluonts.mx.model; the old import paths continue working in this release, but are deprecated and will be removed in the next release. For example, now the MXNet-based DeepAREstimator should be imported from gluonts.mx (or gluonts.mx.model.deepar).
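
For instance, updating an existing import is a one-line change (shown here for DeepAREstimator; the same pattern applies to the other MXNet-based models):

```python
# Deprecated in 0.11.0, to be removed in the next release:
# from gluonts.model.deepar import DeepAREstimator

# New import path for the MXNet-based implementation:
from gluonts.mx import DeepAREstimator
# or, equivalently:
# from gluonts.mx.model.deepar import DeepAREstimator
```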

We also removed deprecated options for learning rate reduction in the gluonts.mx.Trainer class: these can now be controlled via the LearningRateReduction callback.
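
A sketch of what the callback-based configuration might look like; the parameter values below are illustrative, and the exact LearningRateReduction arguments should be verified against the gluonts.mx.trainer.callback API.

```python
from gluonts.mx import Trainer
from gluonts.mx.trainer.callback import LearningRateReduction

trainer = Trainer(
    epochs=20,
    callbacks=[
        LearningRateReduction(
            objective="min",   # reduce when the tracked loss stops improving
            patience=10,       # evaluations to wait without improvement
            decay_factor=0.5,  # multiply the learning rate by this factor
            min_lr=1e-5,       # lower bound on the learning rate
        )
    ],
)
```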

Dataset splitting functionality (experimental)

We updated the functionality for splitting time series datasets (along the time axis) into training/validation/test portions. This is now easily accessed via the split function (from gluonts.dataset.split import split); learn more about it here.

This feature is experimental and subject to future changes.
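
A minimal sketch of the new entry point; the dataset, offset, prediction_length, and windows values are illustrative only.

```python
from gluonts.dataset.repository.datasets import get_dataset
from gluonts.dataset.split import split

dataset = get_dataset("m4_hourly")

# Split along the time axis: everything except the last 48 time steps
# of each series goes into the training part.
training_data, test_template = split(dataset.test, offset=-48)

# Generate test instances: two non-overlapping windows of length 24
# taken from the held-out part of each series.
test_data = test_template.generate_instances(
    prediction_length=24,
    windows=2,
)

for model_input, label in test_data:
    ...  # feed model_input to a predictor and compare against label
```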

Changelog

Breaking changes

New features / major improvements

Bug fixes / minor improvements

Documentation

Test / setup changes
