pypi gluonts 0.7.0


GluonTS adds improved support for PyTorch-based models, new options for existing models, and general improvements to components and tooling.

Breaking changes

This release comes with a few breaking changes (but for good reasons). In particular, models trained and serialized prior to 0.7.0 may not be de-serializable using 0.7.0.

  • Changes in model components and abstractions:
  • #1256 and #1206 contain significant changes to the GluonEstimator abstract class, as well as to the InstanceSplitter and InstanceSampler implementations. You are affected by this change only if you implemented custom models based on GluonEstimator. The change makes it easier to define (and understand, if you are reading the code) how fixed-length instances are sampled from the original dataset for training or validation. Furthermore, these PRs split data transformation into explicit "pre-processing" steps (deterministic ones, e.g. feature engineering) and "iteration" steps (possibly random ones, e.g. random training instance sampling). As a result, a cache_data option is now available in the train method: whenever the pre-processed data fits in memory, it can be cached there and iterated over more quickly.
    • #1233 splits normalized/unnormalized time features from gluonts.time_features into distinct types.
    • #1223 updates the interface of ISSM types, making it easier to define custom ones e.g. by having a custom set of seasonality patterns. Related changes to DeepStateEstimator enable these customizations when defining a DeepState model.
  • Changes in Trainer:
    • #1178 removes the input_names argument from the __call__ method. Now the provided data loaders are expected to produce batches containing only the fields that the network being trained consumes. This can be easily obtained by transforming the dataset with SelectFields.
  • Package structure reorg:
    • #1183 puts all MXNet-dependent modules under gluonts.mx, with some exceptions (gluonts.model and gluonts.nursery). With the new structure, you are no longer forced to install MXNet unless you specifically require modules that depend on it.
    • #1402 makes the Evaluator class lighter by moving the evaluation metrics to gluonts.evaluation.metrics instead of having them as static methods of the class.
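The pre-processing/iteration split behind the new cache_data option can be illustrated with a conceptual sketch. This is plain Python, not the GluonTS API; the function names below are made up for illustration:

```python
import random

# Conceptual sketch (not the GluonTS API): splitting a transformation
# pipeline into a deterministic "pre-processing" stage and a random
# "iteration" stage makes the pre-processed output safe to cache.

def preprocess(entry):
    # Deterministic step, e.g. feature engineering: safe to run once.
    return {**entry, "mean": sum(entry["target"]) / len(entry["target"])}

def sample_instance(entry, length, rng):
    # Random step, e.g. training-instance sampling: must run every epoch.
    start = rng.randrange(len(entry["target"]) - length + 1)
    return entry["target"][start:start + length]

dataset = [{"target": [1.0, 2.0, 3.0, 4.0]}, {"target": [5.0, 6.0, 7.0]}]

# With cache_data=True, the deterministic stage runs once and its output
# is kept in memory; only the cheap random stage repeats across epochs.
cached = [preprocess(e) for e in dataset]

rng = random.Random(0)
batch = [sample_instance(e, length=2, rng=rng) for e in cached]
```

Separating the two stages is what makes caching sound: only steps with no randomness may be memoized without changing the training distribution.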

New features

PyTorch support:

  • PyTorchPredictor serde (#1086)
  • Add equality operator for PytorchPredictor (#1190)
  • Allow Pytorch predictor to be trained and loaded on different devices (#1244)
  • Add distribution-based forecast types for torch, output layers, tests (#1266)
  • Add more distribution output classes for PyTorch, add tests (#1272)
  • Add pytorch tutorial notebook (#1289)
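The serde and equality additions above can be illustrated with a conceptual sketch. This is a toy stand-in, not the actual PyTorchPredictor interface; the class and method names are made up for illustration:

```python
import pickle
from dataclasses import dataclass, field

# Conceptual sketch (not the actual PyTorchPredictor interface): a
# predictor defined by its hyperparameters and weights, supporting
# round-trip serialization and a structural equality check.

@dataclass
class ToyPredictor:
    prediction_length: int
    weights: list = field(default_factory=list)

    def serialize(self) -> bytes:
        return pickle.dumps(self)

    @staticmethod
    def deserialize(blob: bytes) -> "ToyPredictor":
        # A real implementation could also remap the target device at
        # load time, which is what #1244 enables for PyTorch predictors.
        return pickle.loads(blob)

p = ToyPredictor(prediction_length=12, weights=[0.1, 0.2])
restored = ToyPredictor.deserialize(p.serialize())
```

The dataclass-generated equality compares field values, mirroring the idea behind the equality operator added in #1190: a restored predictor should compare equal to the original even though it is a different object.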

Distributions:

  • Zero Inflated Poisson Distribution (#1130)
  • GenPareto cdf and quantile functions (#1142)
  • Added quantile function based on cdf bisection (#1145)
  • Add AffineTransformedDistribution (#1161)
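The idea behind the bisection-based quantile function (#1145) can be sketched in plain Python. This is a simplified stand-alone version, not the GluonTS implementation: given any monotone CDF, invert it numerically:

```python
import math

def quantile_by_bisection(cdf, level, lo=-100.0, hi=100.0, tol=1e-9):
    # Invert a monotone CDF numerically: find x with cdf(x) ~= level by
    # repeatedly halving the bracketing interval [lo, hi].
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if cdf(mid) < level:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Standard exponential CDF: F(x) = 1 - exp(-x); its median is ln(2).
exp_cdf = lambda x: 1.0 - math.exp(-x) if x > 0 else 0.0
median = quantile_by_bisection(exp_cdf, 0.5, lo=0.0, hi=50.0)
```

The appeal of this approach is that it only requires a CDF, so any distribution gains an approximate quantile function for free, at the cost of a few dozen CDF evaluations per query.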

Models:

  • add estimator/predictor types for autogluon tabular (#1105)
  • Added thetaf method to the R predictor (#1281)
  • Adding neural ode code for lotka volterra and corresponding notebook (#1023)
  • Added lightgbm support for QRX/Rotbaum (#1365)
  • Deepar imputation model (#1380)
  • Initial commit for GMM-TPP (#1397)

Datasets & tooling:

  • Implemented generate_rolling_datasets (#844)
  • Add a MinMax scaler (#1134)
  • introduce functional api for data generation recipes (#1153)
  • include m3 dataset (#1169)
  • Improvements for data generation (#1195)
  • Add most forecasters as entry points. (#1351)
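The rolling-evaluation idea behind generate_rolling_datasets (#844) can be sketched as follows. This is a simplified stand-alone version, not the GluonTS function; the helper name is made up for illustration:

```python
def rolling_windows(target, prediction_length, windows):
    # Produce progressively shorter copies of the series, each truncated
    # by one more prediction_length, so a model can be evaluated on
    # several consecutive forecast origins of the same series.
    out = []
    for i in range(windows):
        cut = len(target) - i * prediction_length
        out.append(target[:cut])
    return out

series = list(range(10))
rolled = rolling_windows(series, prediction_length=2, windows=3)
# rolled[0] has 10 points, rolled[1] has 8, rolled[2] has 6.
```

Each truncated copy serves as the "history" for one forecast origin, which is how rolling datasets turn a single series into several evaluation instances.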
