PyMC 4.0.0


If you want a description of the highlights of this release, check out the release announcement on our new website.
Feel free to read it, print it out, and give it to people on the street -- because everybody has to know PyMC 4.0 is officially out 🍾

Do not miss 🚨

  • ⚠️ The project was renamed to "PyMC". The library is now installed with pip install pymc and imported via import pymc as pm. See this migration guide for more details.
  • ⚠️ Theano-PyMC has been replaced with Aesara, so all external references to theano and tt need to be replaced with aesara and at, respectively (see 4471).
  • ⚠️ Added support for JAX and JAX-based samplers, which also allows sampling on GPUs. This benchmark shows speed-ups of up to 11x.
  • ⚠️ The GLM submodule was removed, please use Bambi instead.
  • ⚠️ PyMC now requires SciPy version >= 1.4.1 (see #4857).

v3 features not yet working in v4 ⏳

⚠️ We plan to get these working again, but at this point their inner workings have not been refactored.

  • MvNormalRandomWalk, MvStudentTRandomWalk, GARCH11 and EulerMaruyama distributions (see #4642)
  • Nested Mixture distributions (see #5533)
  • pm.sample_posterior_predictive_w (see #4807)
  • Partially observed Multivariate distributions (see #5260)

New features 🥳

  • Distributions:

    • Univariate censored distributions are now available via pm.Censored. #5169

    • The CAR distribution has been added to allow the use of conditional autoregressions, which are often used in spatial and network models.

    • Added a logcdf implementation for the Kumaraswamy distribution (see #4706).

    • The OrderedMultinomial distribution has been added for use on ordinal data which are aggregated by trial, like multinomial observations, whereas OrderedLogistic only accepts ordinal data in a disaggregated format, like categorical observations (see #4773).

    • The Polya-Gamma distribution has been added (see #4531). To make use of this distribution, the polyagamma>=1.3.1 library must be installed and available in the user's environment.

    • pm.DensityDist can now accept an optional logcdf keyword argument to pass in a function to compute the cumulative distribution function of the distribution (see #5026).

    • pm.DensityDist can now accept an optional moment keyword argument to pass in a function to compute the moment of the distribution (see #5026).

    • Added an alternative parametrization, logit_p, to the pm.Binomial and pm.Categorical distributions (see #5637). A short sketch of these distribution features follows below.
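
    A minimal sketch (with illustrative values, not taken from the release notes) combining the new pm.Censored wrapper and the logit_p parametrization:

    ```python
    import pymc as pm

    with pm.Model() as model:
        # Censor a Normal at [-1, 1]: draws outside the bounds pile up at the limits.
        latent = pm.Normal.dist(mu=0.0, sigma=1.0)
        x = pm.Censored("x", latent, lower=-1.0, upper=1.0)

        # Binomial can now be parametrized on the logit scale instead of via p.
        y = pm.Binomial("y", n=10, logit_p=0.0)
    ```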

  • Model dimensions:

    • The dimensionality of model variables can now be parametrized through either shape or dims (see #4696):
      • With shape the length of dimensions must be given numerically or as scalar Aesara Variables. Numeric entries in shape restrict the model variable to the exact length and re-sizing is no longer possible.
      • dims keeps model variables re-sizeable (for example through pm.Data) and leads to well defined coordinates in InferenceData objects.
      • An Ellipsis (...) in the last position of shape or dims can be used as short-hand notation for implied dimensions.
    • New features for pm.Data containers:
      • With pm.Data(..., mutable=False), or by using pm.ConstantData(), one can now create TensorConstant data variables. These can be more performant and compatible in situations where a variable doesn't need to be changed via pm.set_data(). See #5295. If you do need to change the variable, use pm.Data(..., mutable=True) or pm.MutableData(). A usage sketch follows below.
      • New named dimensions can be introduced to the model via pm.Data(..., dims=...). For mutable data variables (see above) the lengths of these dimensions are symbolic, so they can be re-sized via pm.set_data().
      • pm.Data now passes additional kwargs to aesara.shared/at.as_tensor. #5098.
    • The length of dims in the model is now tracked symbolically through Model.dim_lengths (see #4625).
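
    A minimal sketch of the dims/coords and data-container behavior described above (names and values are illustrative only):

    ```python
    import numpy as np
    import pymc as pm

    coords = {"city": ["Berlin", "Paris", "Lisbon"]}
    with pm.Model(coords=coords) as model:
        # Mutable data keeps symbolic dimension lengths and can be resized via pm.set_data().
        temperature = pm.MutableData("temperature", np.array([12.0, 14.0, 17.0]), dims="city")
        # Constant data cannot be changed later, which can be more performant.
        altitude = pm.ConstantData("altitude", np.array([34.0, 35.0, 2.0]), dims="city")
        # dims keeps the variable re-sizeable and yields labeled coordinates in InferenceData;
        # a numeric shape would fix the length instead.
        mu = pm.Normal("mu", 0.0, 1.0, dims="city")
    ```
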
  • Sampling:

    • ⚠️ Random seeding behavior changed (see #5787)!
      • Sampling results will differ from those of v3 when passing the same random_seed as before. They will be consistent across subsequent v4 releases unless mentioned otherwise.
      • Sampling functions no longer respect user-specified global seeding! Always pass random_seed to ensure reproducible behavior (see the sketch below).
      • random_seed now accepts RandomState and Generator objects in addition to integers.
    • A small change to the mass matrix tuning methods jitter+adapt_diag (the default) and adapt_diag improves performance early on during tuning for some models. #5004
    • New experimental mass matrix tuning method jitter+adapt_diag_grad. #5004
    • Support for samplers written in JAX:
      • Added support for numpyro's NUTS sampler via pymc.sampling_jax.sample_numpyro_nuts()
      • Added support for blackjax's NUTS sampler via pymc.sampling_jax.sample_blackjax_nuts() (see #5477)
      • pymc.sampling_jax samplers support log_likelihood, observed_data, and sample_stats in the returned InferenceData object (see #5189)
      • Added support for pm.Deterministic in pymc.sampling_jax (see #5182)
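
    A sketch of the new seeding behavior and the JAX-sampler entry points (the model and values are illustrative; the JAX path assumes jax and numpyro are installed):

    ```python
    import pymc as pm

    with pm.Model() as model:
        mu = pm.Normal("mu", 0.0, 1.0)
        obs = pm.Normal("obs", mu=mu, sigma=1.0, observed=[0.1, -0.3, 0.2])

        # Global seeding is no longer respected; always pass random_seed explicitly.
        idata = pm.sample(1000, tune=1000, random_seed=123)

        # Optional JAX-based NUTS sampling (requires jax and numpyro):
        # import pymc.sampling_jax
        # idata_jax = pymc.sampling_jax.sample_numpyro_nuts(draws=1000, tune=1000)
    ```
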
  • Miscellaneous:

    • The new pm.find_constrained_prior function can be used to find optimized prior parameters of a distribution under some constraints (e.g., lower and upper bounds). See #5231 and the sketch below.
    • Nested models now inherit the parent model's coordinates. #5344
    • softmax and log_softmax functions added to math module (see #5279).
    • Added the low-level compile_forward_sampling_function method to compile the aesara function responsible for generating forward samples (see #5759).
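
    For example, a hedged sketch of pm.find_constrained_prior, searching for Gamma parameters that place roughly 95% of the prior mass between 0.1 and 0.4 (the bounds and initial guess are illustrative):

    ```python
    import pymc as pm

    opt_params = pm.find_constrained_prior(
        pm.Gamma,
        lower=0.1,
        upper=0.4,
        mass=0.95,
        init_guess={"alpha": 1.0, "beta": 10.0},
    )

    with pm.Model():
        rate = pm.Gamma("rate", **opt_params)
    ```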

Expected breaking changes 💔

  • pm.sample(return_inferencedata=True) is now the default (see #4744).
  • ArviZ plots and stats wrappers were removed. The functions are now just available by their original names (see #4549 and 3.11.2 release notes).
  • pm.sample_posterior_predictive(vars=...) kwarg was removed in favor of var_names (see #4343).
  • ElemwiseCategorical step method was removed (see #4701)
  • LKJCholeskyCov's compute_corr keyword argument is now set to True by default (see #5382)
  • The alternative sd keyword argument has been removed from all distributions; sigma should be used instead (see #5583). The sketch below shows the new v4 defaults.
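
Taken together, a minimal sketch of how these defaults and renames look in v4 (the model itself is illustrative only):

```python
import pymc as pm

with pm.Model() as model:
    # sd= was removed; use sigma=
    x = pm.Normal("x", mu=0.0, sigma=1.0, observed=[0.2, -0.1, 0.4])

    # pm.sample now returns an InferenceData object by default.
    idata = pm.sample(random_seed=1)

    # vars= was replaced by var_names=
    ppc = pm.sample_posterior_predictive(idata, var_names=["x"])
```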

Read on if you're a developer. Or curious. Or both.

Unexpected breaking changes (action needed) 😲

Very important ⚠️

  • The pm.Bound interface no longer accepts a callable class as an argument; instead it requires an instantiated distribution (created via the .dist() API) to be passed as an argument. In addition, Bound no longer returns a class instance but works as a normal PyMC distribution. Finally, it is no longer possible to do predictive random sampling from Bounded variables. Please consult the new documentation for details on how to use Bounded variables (see #4815).
  • BART has received various updates (5091, 5177, 5229, 4914) but was removed from the main package in #5566. It is now available from pymc-experimental.
  • Removed AR1. AR of order 1 should be used instead (see #5734).
  • The pm.EllipticalSlice sampler was removed (see #5756).
  • BaseStochasticGradient was removed (see #5630)
  • pm.Distribution(...).logp(x) is now pm.logp(pm.Distribution(...), x).
  • pm.Distribution(...).logcdf(x) is now pm.logcdf(pm.Distribution(...), x).
  • pm.Distribution(...).random(size=x) is now pm.draw(pm.Distribution(...), draws=x) (see the sketch below).
  • pm.draw_values(...) and pm.generate_samples(...) were removed.
  • pm.fast_sample_posterior_predictive was removed.
  • pm.sample_prior_predictive, pm.sample_posterior_predictive and pm.sample_posterior_predictive_w now return an InferenceData object by default, instead of a dictionary (see #5073).
  • pm.sample_prior_predictive no longer returns transformed variable values by default. Pass them by name in var_names if you want to obtain these draws (see 4769).
  • pm.sample(trace=...) no longer accepts MultiTrace or len(.) > 0 traces (see #5019).
  • Setting of initial values:
    • Setting initial values through pm.Distribution(testval=...) is now pm.Distribution(initval=...).
    • Model.update_start_values(...) was removed. Initial values can be set in the Model.initial_values dictionary directly.
    • Test values can no longer be set through pm.Distribution(testval=...) and must be assigned manually.
  • transforms module is no longer accessible at the root level. It is accessible at pymc.distributions.transforms (see #5347).
  • logp, dlogp, and d2logp and their nojac variations were removed. Use Model.compile_logp, Model.compile_dlogp, and Model.compile_d2logp with the jacobian keyword instead.
  • pm.DensityDist no longer accepts the logp as its first positional argument. It is now an optional keyword argument. If you pass a callable as the first positional argument, a TypeError will be raised (see #5026).
  • pm.DensityDist now accepts distribution parameters as positional arguments. Passing them as a dictionary in the observed keyword argument is no longer supported and will raise an error (see #5026).
  • The signature of the logp and random functions that can be passed into a pm.DensityDist has been changed (see #5026).
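
A minimal sketch of the new functional API for log-probabilities and draws, and of the compiled model log-probability mentioned above (all values are illustrative):

```python
import pymc as pm

dist = pm.Normal.dist(mu=0.0, sigma=1.0)

logp_val = pm.logp(dist, 0.5).eval()      # previously dist.logp(0.5)
logcdf_val = pm.logcdf(dist, 0.5).eval()  # previously dist.logcdf(0.5)
samples = pm.draw(dist, draws=100)        # previously dist.random(size=100)

# Compiled model log-probability, replacing the removed logp/dlogp/d2logp functions:
with pm.Model() as model:
    x = pm.Normal("x", 0.0, 1.0)
logp_fn = model.compile_logp(jacobian=True)
print(logp_fn({"x": 0.0}))
```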

Important:

  • Signature and default parameters changed for several distributions:

    • pm.StudentT now requires either sigma or lam as kwarg (see #5628)
    • pm.StudentT now requires nu to be specified (no longer defaults to 1) (see #5628)
    • pm.AsymmetricLaplace positional arguments re-ordered (see #5628)
    • pm.AsymmetricLaplace now requires mu to be specified (no longer defaults to 0) (see #5628)
    • ZeroInflatedPoisson theta parameter was renamed to mu (see #5584).
    • pm.GaussianRandomWalk initial distribution defaults to zero-centered normal with sigma=100 instead of flat (see #5779)
    • pm.AR initial distribution defaults to unit normal instead of flat (see #5779)
  • logpt, logpt_sum, logp_elemwiset and their nojac variations were removed. Use Model.logpt(jacobian=True/False, sum=True/False) instead.

  • dlogp_nojact and d2logp_nojact were removed. Use Model.dlogpt and d2logpt with jacobian=False instead.

  • model.makefn is now called Model.compile_fn, and model.fn was removed.

  • Methods starting with fast_*, such as Model.fast_logp, were removed. The same applies to PointFunc classes.

  • Model(model=...) kwarg was removed

  • Model(theano_config=...) kwarg was removed

  • Model.size property was removed (use Model.ndim instead).

  • dims and coords handling:

    • Model.RV_dims and Model.coords are now read-only properties. To modify the coords dictionary use Model.add_coord.
    • dims or coordinate values that are None will be auto-completed (see #4625).
    • Coordinate values passed to Model.add_coord are always converted to tuples (see #5061).
  • Transform.forward and Transform.backward signatures changed.

  • Changes to the Gaussian Process (GP) submodule (see 5055):

    • The gp.prior(..., shape=...) kwarg was renamed to size.
    • Multiple methods including gp.prior now require explicit kwargs.
    • For all implementations, gp.Latent, gp.Marginal etc., cov_func and mean_func are required kwargs (see the sketch below).
    • In the Windows test conda environment the mkl version is pinned to 2020.4 and mkl-service to 2.3.0. This was required for gp.MarginalKron to function properly.
    • gp.MvStudentT uses rotated samples from StudentT directly now, instead of sampling from pm.Chi2 and then from pm.Normal.
    • The "jitter" parameter, or the diagonal noise term added to Gram matrices such that the Cholesky is numerically stable, is now exposed to the user instead of hard-coded. See the function gp.util.stabilize.
    • The is_observed argument for gp.Marginal* implementations has been deprecated.
    • In the gp.utils file, the kmeans_inducing_points function now passes through kmeans_kwargs to scipy's k-means function.
    • The replace_with_values function has been added to gp.utils.
    • MarginalSparse has been renamed MarginalApprox.
  • Removed MixtureSameFamily. Mixture is now capable of handling batched multivariate components (see #5438).
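
For the GP changes, a minimal sketch of the now-explicit keyword arguments (the kernel, lengthscale, and inputs are illustrative only):

```python
import numpy as np
import pymc as pm

X = np.linspace(0.0, 1.0, 20)[:, None]

with pm.Model() as model:
    # cov_func (and, if needed, mean_func) must now be passed as explicit kwargs.
    cov = pm.gp.cov.ExpQuad(1, ls=0.2)
    gp = pm.gp.Latent(cov_func=cov)
    f = gp.prior("f", X=X)
```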

Documentation

  • Switched to the pydata-sphinx-theme
  • Updated our documentation tooling to use MyST, MyST-NB, sphinx-design, notfound.extension,
    sphinx-copybutton and sphinx-remove-toctrees.
  • Separated the builds of the example notebooks and of the versioned docs.
  • Restructured the documentation to facilitate learning paths
  • Updated API docs to document objects at the path users should use to import them

Maintenance

  • ⚠️ Fixed a long-standing bug in the Slice sampler that resulted in biased samples (see #5816).
  • Removed float128 dtype support (see #4514).
  • The logp method of Uniform and DiscreteUniform no longer depends on pymc.distributions.dist_math.bound for proper evaluation (see #4541).
  • We now include cloudpickle as a required dependency, and no longer depend on dill (see #4858).
  • The incomplete_beta function in pymc.distributions.dist_math was replaced by aesara.tensor.betainc (see 4857).
  • math.log1mexp and math.log1mexp_numpy will expect negative inputs in the future. A FutureWarning is now raised unless negative_input=True is set (see #4860).
  • Changed name of Lognormal distribution to LogNormal to harmonize CamelCase usage for distribution names.
  • Attempting to iterate over a MultiTrace will now raise NotImplementedError.
  • Removed silent normalisation of p parameters in Categorical and Multinomial distributions (see #5370).
