# Release notes
This is the 0.8 release of TensorFlow Probability. It is tested and stable against TensorFlow versions 2.0.0 and 1.15.0rc1.
## Change notes
- GPU-friendly "unrolled" NUTS: `tfp.mcmc.NoUTurnSampler`
  - Open-source the unrolled implementation of the No U-Turn Sampler.
  - Switch back to the original U-turn criterion from Hoffman & Gelman (2014).
  - Fix a bug in unrolled NUTS so that it does not lose shape when `event_shape=1`.
  - Fix the U-turn check in unrolled NUTS at tree extension.
  - Refactor the U-turn check in unrolled NUTS.
  - Fix a dynamic-shape bug in unrolled NUTS.
  - Move unrolled NUTS into `mcmc`, with additional cleanup.
  - Make sure the unrolled NUTS sampler handles scalar `target_log_prob`s correctly.
  - Change the U-turn check in unrolled NUTS to use a `tf.while_loop`.
  - Implement multinomial sampling across the tree (instead of slice sampling) in unrolled NUTS.
  - Expose additional diagnostics in `previous_kernel_results` in unrolled NUTS so that it works with `*_step_size_adaptation`.
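The U-turn criterion referenced above can be sketched in plain Python. This is an illustrative, unbatched version of the Hoffman & Gelman (2014) rule (terminate when either endpoint's momentum points back across the position span); the function name and list-of-floats representation are ours, not TFP's, whose real check is batched and runs inside a `tf.while_loop`:

```python
def u_turn(theta_minus, theta_plus, r_minus, r_plus):
    """Illustrative U-turn check for the leftmost/rightmost states of a
    NUTS trajectory.  Positions and momenta are plain lists of floats."""
    # Span of the trajectory: theta_plus - theta_minus.
    delta = [p - m for p, m in zip(theta_plus, theta_minus)]
    # Project each endpoint's momentum onto the span.
    dot_minus = sum(d * r for d, r in zip(delta, r_minus))
    dot_plus = sum(d * r for d, r in zip(delta, r_plus))
    # A U-turn occurs when either projection is negative.
    return dot_minus < 0 or dot_plus < 0
```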
- MCMC
  - Modify the shape handling in `DualAveragingStepSizeAdaptation` so that it works with non-scalar `event_shape`.
  - Support structured samples in `tfp.monte_carlo.expectation`.
  - Minor fix for the docstring example in `leapfrog_integrator`.
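The idea behind structured-sample support can be sketched in pure Python: a Monte Carlo expectation simply averages a function of each draw, and nothing stops a draw from being a structure (here a dict) rather than a flat array. This toy `expectation` helper is ours, not TFP's API:

```python
def expectation(f, samples):
    """Monte Carlo estimate of E[f(X)]: average f over a list of draws.
    Each draw may be structured (e.g. a dict of named parts)."""
    values = [f(s) for s in samples]
    return sum(values) / len(values)

# Structured samples: each draw is a dict with two named components.
draws = [{"loc": 1.0, "scale": 2.0}, {"loc": 3.0, "scale": 4.0}]
mean_loc = expectation(lambda s: s["loc"], draws)
```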
- VI
  - Add utilities for fitting variational distributions.
  - Improve Csiszar-divergence support for joint variational distributions.
  - Ensure that joint distributions are correctly recognized as reparameterizable by `monte_carlo_csiszar_f_divergence`.
  - Rename `monte_carlo_csiszar_f_divergence` to `monte_carlo_variational_loss`.
  - Refactor `tfp.vi.csiszar_vimco_helper` to expose useful leave-one-out statistical tools.
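The leave-one-out statistic at the heart of the VIMCO helper can be sketched in pure Python: for each index `i`, compute the log-mean-exp of all log-values *except* the `i`-th. These function names are illustrative, not TFP's exported API:

```python
import math

def log_mean_exp(xs):
    """Numerically stable log(mean(exp(xs))) for a list of floats."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs) / len(xs))

def leave_one_out_log_mean_exp(xs):
    """For each i, log-mean-exp of xs with the i-th element held out."""
    return [log_mean_exp(xs[:i] + xs[i + 1:]) for i in range(len(xs))]
```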
- Distributions
  - Added `tfp.distributions.GeneralizedPareto`.
  - Multinomial and DirichletMultinomial samplers are now reproducible.
  - HMM samples are now reproducible.
  - Clean up unneeded conversion to tensor in `quantile()`.
  - Added support for dynamic `num_steps` in `HiddenMarkovModel`.
  - Added an implementation of `quantile()` for exponential distributions.
  - Fix the entropy of the Categorical distribution when `logits` contains `-inf`.
  - Annotate float-valued Deterministic distributions as reparameterized.
  - Establish patterns which ensure that TFP objects are "GradientTape safe."
  - "GradientTape-safe" distributions: FiniteDiscrete, VonMises, Binomial, Dirichlet, Multinomial, DirichletMultinomial, Categorical, Deterministic.
  - Add `tfp.util.DeferredTensor` to delay Tensor operations on `tf.Variable`s (also works for `tf.Tensor`s).
  - Add `probs_parameter` and `logits_parameter` member functions to Categorical-like distributions. Going forward, users should prefer these functions over the `probs`/`logits` properties, because a property may be `None` if the distribution was not parameterized with it.
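The new exponential `quantile()` has a closed form worth recording: for CDF `F(x) = 1 - exp(-rate * x)`, the inverse is `F⁻¹(p) = -log(1 - p) / rate`. A minimal pure-Python sketch (the function name is ours; `log1p` is the numerically stable way to evaluate `log(1 - p)` for small `p`):

```python
import math

def exponential_quantile(p, rate):
    """Inverse CDF of the exponential distribution with the given rate."""
    if not 0.0 <= p < 1.0:
        raise ValueError("p must be in [0, 1)")
    # -log(1 - p) / rate, computed stably via log1p.
    return -math.log1p(-p) / rate
```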
- Bijectors
  - Add a `log_scale` parameter to the AffineScalar bijector.
  - Added `tfp.bijectors.RationalQuadraticSpline`.
  - Add the SoftFloor bijector. (Note: a known inverse bug is a work in progress.)
  - Allow using an arbitrary bijector in RealNVP for the coupling.
  - Allow using an arbitrary bijector in MaskedAutoregressiveFlow for the coupling.
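The appeal of the new `log_scale` parameterization is that `scale = exp(log_scale)` is positive by construction and the forward log-det-Jacobian is just `log_scale` itself. A scalar sketch with hypothetical free functions (TFP's AffineScalar is a bijector class, not these helpers):

```python
import math

def affine_forward(x, shift, log_scale):
    """y = shift + exp(log_scale) * x; scale is positive by construction."""
    return shift + math.exp(log_scale) * x

def affine_inverse(y, shift, log_scale):
    """Recover x from y."""
    return (y - shift) * math.exp(-log_scale)

def affine_fldj(log_scale):
    """Forward log-det-Jacobian: log |d/dx forward(x)| = log_scale."""
    return log_scale
```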
- Experimental auto-batching system: `tfp.experimental.auto_batching`
  - Open-source the program-counter-based auto-batching system.
  - Added `tfp.experimental.auto_batching`, an experimental system to recover batch parallelism across recursive function invocations.
  - Autobatched NUTS supports batching across consecutive trajectories.
  - Add support for field references to auto-batching.
  - Increase the amount of Python syntax that "just works" in auto-batched functions.
  - Add a pop-push fusion optimization to the auto-batching system (tail-call optimization also landed recently).
  - Open-source the auto-batched implementation of the No U-Turn Sampler.
- STS
  - Support TF2/Eager-mode fitting of STS models, and deprecate `build_factored_variational_loss`.
  - Use dual-averaging step-size adaptation for STS HMC fitting.
  - Add support for imputing missing values in structural time series models.
  - Standardize parameter scales during STS inference.
- Layers
  - Add the WeightNorm layer wrapper.
  - Fix gradients flowing through variables in the old-style variational layers.
  - `tf.keras.models.save_model` and `model.save` now default to saving a TensorFlow SavedModel.
- Stats/Math
  - Add calibration metrics to `tfp.stats`.
  - Add an `output_gradients` argument to `value_and_gradient`.
  - Add the Geyer initial positive sequence truncation criterion to `tfp.mcmc.effective_sample_size`.
  - Resolve shape inconsistencies in the PSDKernels API.
  - Support dynamic-shaped results in `tfp.math.minimize`.
  - ODE: Implement the adjoint method for gradients with respect to the initial state.
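Geyer's initial positive sequence rule mentioned above can be sketched in simplified form: sum successive *pairs* of autocorrelations and truncate at the first pair whose sum is not positive, then plug the truncated sum into the usual ESS formula. This is a didactic, unbatched version with hypothetical names, not TFP's implementation:

```python
def geyer_truncated_sum(rhos):
    """Sum rho_1 + rho_2 + ... in adjacent pairs, stopping at the first
    pair with a non-positive sum (Geyer's initial positive sequence).
    A trailing unpaired autocorrelation is ignored in this sketch."""
    total = 0.0
    for m in range(0, len(rhos) - 1, 2):
        pair = rhos[m] + rhos[m + 1]
        if pair <= 0.0:
            break
        total += pair
    return total

def effective_sample_size(n, rhos):
    """ESS = n / (1 + 2 * truncated sum of autocorrelations)."""
    return n / (1.0 + 2.0 * geyer_truncated_sum(rhos))
```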
Huge thanks to all the contributors to this release!
- Alexey Radul
- Anudhyan Boral
- Arthur Lui
- Brian Patton
- Christopher Suter
- Colin Carroll
- Dan Moldovan
- Dave Moore
- Edward Loper
- Emily Fertig
- Gaurav Jain
- Ian Langmore
- Igor Ganichev
- Jacob Burnim
- Jeff Pollock
- Joshua V. Dillon
- Junpeng Lao
- Katherine Wu
- Mark Daoust
- Matthieu Coquet
- Parsiad Azimzadeh
- Pavel Sountsov
- Pavithra Vijay
- PJ Trainor
- prabhu prakash kagitha
- prakashkagitha
- Reed Wanderman-Milne
- refraction-ray
- Rif A. Saurous
- RJ Skerry-Ryan
- Saurabh Saxena
- Sharad Vikram
- Sigrid Keydana
- skeydan
- Srinivas Vasudevan
- Yash Katariya
- Zachary Nado