GluonTS adds improved support for PyTorch-based models, new options for existing models, and general improvements to components and tooling.
## Breaking changes
This release comes with a few breaking changes (but for good reasons). In particular, models trained and serialized prior to 0.7.0 may not be de-serializable using 0.7.0.
- Changes in model components and abstractions:
  - #1256 and #1206 contain significant changes to the `GluonEstimator` abstract class, as well as to the `InstanceSplitter` and `InstanceSampler` implementations. You are affected by this change only if you implemented custom models based on `GluonEstimator`. The change makes it easier to define (and understand, in case you're reading the code) how fixed-length instances are to be sampled from the original dataset for training or validation purposes. Furthermore, these PRs break data transformation into more explicit "pre-processing" steps (deterministic ones, e.g. feature engineering) vs "iteration" steps (possibly random, e.g. random training instance sampling), so that a `cache_data` option is now available in the `train` method to have the pre-processed data cached in memory, and be iterated over more quickly, whenever it fits; see the sketch below.
  - #1233 splits normalized/unnormalized time features from `gluonts.time_features` into distinct types.
  - #1223 updates the interface of `ISSM` types, making it easier to define custom ones, e.g. by having a custom set of seasonality patterns. Related changes to `DeepStateEstimator` enable these customizations when defining a DeepState model.
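For illustration, here is a minimal sketch of the new `cache_data` option, using a `DeepAREstimator` and a dataset from the GluonTS repository; the dataset name and number of epochs are arbitrary, and any estimator deriving from `GluonEstimator` exposes the same option.

```python
from gluonts.dataset.repository.datasets import get_dataset
from gluonts.model.deepar import DeepAREstimator
from gluonts.mx.trainer import Trainer

dataset = get_dataset("m4_hourly")

estimator = DeepAREstimator(
    freq=dataset.metadata.freq,
    prediction_length=dataset.metadata.prediction_length,
    trainer=Trainer(epochs=5),
)

# With cache_data=True, the deterministic pre-processing steps run once and
# their output is kept in memory, so subsequent epochs only repeat the
# (possibly random) iteration steps, e.g. training instance sampling.
predictor = estimator.train(dataset.train, cache_data=True)
```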
- Changes in `Trainer`:
  - #1178 removes the `input_names` argument from the `__call__` method. Now the provided data loaders are expected to produce batches containing only the fields that the network being trained consumes. This can be easily obtained by transforming the dataset with `SelectFields`, as sketched below.
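A minimal sketch of restricting batches to the fields the network consumes, as `Trainer` now expects; the field names below are placeholders for whatever inputs your own network takes.

```python
from gluonts.transform import Chain, SelectFields

# Placeholders: the fields your network's forward pass consumes.
input_fields = ["past_target", "future_target"]

transformation = Chain(
    [
        # ... your existing pre-processing / instance-splitting steps ...
        SelectFields(input_fields),
    ]
)
```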
- Package structure reorg:
  - #1183 puts all MXNet-dependent modules under `gluonts.mx`, with some exceptions (`gluonts.model` and `gluonts.nursery`). With the new structure, one is not forced to install MXNet unless they specifically require modules that depend on it.
  - #1402 makes the `Evaluator` class lighter, by moving the evaluation metrics to `gluonts.evaluation.metrics` instead of having them as static methods of the class. A sketch of the new import locations follows.
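A minimal sketch of the reorganized layout; the `mse` function name is an assumption about what `gluonts.evaluation.metrics` exposes.

```python
import numpy as np

from gluonts.evaluation import Evaluator           # Evaluator stays in gluonts.evaluation
from gluonts.evaluation.metrics import mse         # metrics are now module-level functions
from gluonts.mx.trainer import Trainer             # MXNet-specific code now lives under gluonts.mx

# Metrics can be used standalone, without instantiating an Evaluator:
print(mse(np.array([1.0, 2.0]), np.array([1.5, 2.5])))  # 0.25
```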
## New features
PyTorch support:
- PyTorchPredictor serde (#1086)
- Add equality operator for PytorchPredictor (#1190)
- Allow Pytorch predictor to be trained and loaded on different devices (#1244)
- Add distribution-based forecast types for torch, output layers, tests (#1266)
- Add more distribution output classes for PyTorch, add tests (#1272)
- Add pytorch tutorial notebook (#1289)
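A minimal sketch of PyTorch predictor serde (#1086) and cross-device loading (#1244); here `predictor` stands in for an already-trained PyTorch-based predictor, and the `device` argument to `deserialize` is an assumption based on the PR description.

```python
from pathlib import Path

import torch

from gluonts.torch.model.predictor import PyTorchPredictor

model_dir = Path("serialized-model")
model_dir.mkdir(parents=True, exist_ok=True)

# ``predictor`` is assumed to be a trained PyTorchPredictor.
predictor.serialize(model_dir)

# Re-load on a different device than the one used for training,
# e.g. trained on GPU, served on CPU.
predictor_cpu = PyTorchPredictor.deserialize(model_dir, device=torch.device("cpu"))
```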
Distributions:
- Zero Inflated Poisson Distribution (#1130)
- GenPareto cdf and quantile functions (#1142)
- Added quantile function based on cdf bisection (#1145)
- Add AffineTransformedDistribution (#1161)
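A minimal sketch exercising the new `GenPareto` cdf and quantile functions (#1142); the parameter names `xi` (shape) and `beta` (scale) and the arbitrary values used are assumptions for illustration.

```python
import mxnet as mx

from gluonts.mx.distribution import GenPareto

dist = GenPareto(xi=mx.nd.array([0.2]), beta=mx.nd.array([1.0]))

print(dist.cdf(mx.nd.array([1.0])))       # P(X <= 1.0)
print(dist.quantile(mx.nd.array([0.5])))  # median, via the new quantile function
```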
Models:
- add estimator/predictor types for autogluon tabular (#1105)
- Added thetaf method to the R predictor (#1281)
- Adding neural ode code for lotka volterra and corresponding notebook (#1023)
- Added lightgbm support for QRX/Rotbaum (#1365)
- Deepar imputation model (#1380)
- Initial commit for GMM-TPP (#1397)
Datasets & tooling: