Release 2.12.0
TensorFlow
Breaking Changes
- Build, Compilation and Packaging:
  - Removed redundant packages `tensorflow-gpu` and `tf-nightly-gpu`. These packages were removed and replaced with packages that direct users to switch to `tensorflow` or `tf-nightly` respectively. Since TensorFlow 2.1, the only difference between these two sets of packages was their names, so there is no loss of functionality or GPU support. See https://pypi.org/project/tensorflow-gpu for more details.
- `tf.function`:
  - `tf.function` now uses the Python inspect library directly for parsing the signature of the Python function it is decorated on. This change may break code where the function signature is malformed but was previously ignored, such as:
    - Using `functools.wraps` on a function with a different signature
    - Using `functools.partial` with an invalid `tf.function` input
  - `tf.function` now enforces input parameter names to be valid Python identifiers. Incompatible names are automatically sanitized, similarly to existing SavedModel signature behavior.
  - Parameterless `tf.function`s are assumed to have an empty `input_signature` instead of an undefined one, even if the `input_signature` is unspecified.
  - `tf.types.experimental.TraceType` now requires an additional `placeholder_value` method to be defined.
  - `tf.function` now traces with placeholder values generated by TraceType instead of the value itself.
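The stricter signature parsing can be reproduced with the stdlib `inspect` module alone. A minimal sketch (plain Python, no TensorFlow required; the function names are made up) of the two malformed-signature cases listed above:

```python
import functools
import inspect

def base(x, y):
    return x + y

# Case 1: functools.partial binding a keyword that base() does not accept.
# inspect.signature cannot resolve such a partial and raises ValueError,
# so signature parsing now fails instead of silently proceeding.
bad_partial = functools.partial(base, z=1)  # 'z' is not a parameter of base
try:
    inspect.signature(bad_partial)
except ValueError as err:
    print("rejected:", err)

# Case 2: functools.wraps copies base's metadata onto a function with a
# different signature, so signature inspection reports (x, y), not (a).
@functools.wraps(base)
def different(a):
    return a

print(inspect.signature(different))  # -> (x, y)
```

Code that relied on such mismatches being ignored should fix the wrapped signature rather than suppress the error.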
- Experimental APIs:
  - `tf.config.experimental.enable_mlir_graph_optimization` and `tf.config.experimental.disable_mlir_graph_optimization` were removed.
Major Features and Improvements
- Support for Python 3.11 has been added.
- Support for Python 3.7 has been removed. We are not releasing any more patches for Python 3.7.
- `tf.lite`:
  - Added 16-bit float type support for the built-in op `fill`.
  - Transpose now supports 6D tensors.
  - Float LSTM now supports diagonal recurrent tensors: https://arxiv.org/abs/1903.08023
- `tf.experimental.dtensor`:
  - The coordination service now works with `dtensor.initialize_accelerator_system`, and is enabled by default.
  - Added `tf.experimental.dtensor.is_dtensor` to check whether a tensor is a DTensor instance.
- `tf.data`:
  - Added support for an alternative checkpointing protocol, which makes it possible to checkpoint the state of the input pipeline without having to store the contents of internal buffers. The new functionality can be enabled through the `experimental_symbolic_checkpoint` option of `tf.data.Options()`.
  - Added a new `rerandomize_each_iteration` argument for the `tf.data.Dataset.random()` operation, which controls whether the sequence of generated random numbers should be re-randomized every epoch or not (the default behavior). If `seed` is set and `rerandomize_each_iteration=True`, the `random()` operation will produce a different (deterministic) sequence of numbers every epoch.
  - Added a new `rerandomize_each_iteration` argument for the `tf.data.Dataset.sample_from_datasets()` operation, which controls whether the sequence of generated random numbers used for sampling should be re-randomized every epoch or not. If `seed` is set and `rerandomize_each_iteration=True`, the `sample_from_datasets()` operation will use a different (deterministic) sequence of numbers every epoch.
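The `rerandomize_each_iteration` semantics can be illustrated without TensorFlow: with a fixed seed, each epoch either replays the same stream (the default) or deterministically derives a fresh one from the seed and the epoch number. A stdlib sketch of that behavior (the helper and its seed derivation are illustrative, not the actual tf.data implementation):

```python
import random

def epoch_stream(seed, epoch, n, rerandomize_each_iteration=False):
    # Default: every epoch replays the identical seeded sequence.
    # Rerandomized: each epoch gets a fresh, still deterministic,
    # stream derived from the seed and the epoch number.
    key = f"{seed}:{epoch}" if rerandomize_each_iteration else f"{seed}"
    rng = random.Random(key)
    return [rng.randrange(100) for _ in range(n)]

same = [epoch_stream(42, e, 5) for e in range(2)]
fresh = [epoch_stream(42, e, 5, rerandomize_each_iteration=True) for e in range(2)]

print(same[0] == same[1])    # True: same numbers every epoch
print(fresh[0] == fresh[1])  # False: different numbers per epoch...
print(fresh[1] == epoch_stream(42, 1, 5, rerandomize_each_iteration=True))
# True: ...but still reproducible for a given (seed, epoch)
```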
- `tf.test`:
  - Added `tf.test.experimental.sync_devices`, which is useful for accurately measuring performance in benchmarks.
- `tf.experimental.dtensor`:
  - Added experimental support for ReduceScatter fusion on GPU (NCCL).
Bug Fixes and Other Changes
- `tf.SavedModel`:
  - Introduced the new class `tf.saved_model.experimental.Fingerprint`, which contains the fingerprint of the SavedModel. See the SavedModel Fingerprinting RFC for details.
  - Introduced the API `tf.saved_model.experimental.read_fingerprint(export_dir)` for reading the fingerprint of a SavedModel.
- `tf.random`:
  - Added non-experimental aliases for `tf.random.split` and `tf.random.fold_in`; the experimental endpoints are still available, so no code changes are necessary.
- `tf.experimental.ExtensionType`:
  - Added the function `experimental.extension_type.as_dict()`, which converts an instance of `tf.experimental.ExtensionType` to a `dict` representation.
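The conversion mirrors the stdlib `dataclasses.asdict` pattern: each declared field of the type becomes a key in the resulting `dict`. A TensorFlow-free sketch of that field-to-dict pattern (`MaskedValue` is a hypothetical stand-in for an `ExtensionType` subclass):

```python
from dataclasses import dataclass, asdict

@dataclass
class MaskedValue:
    # Hypothetical stand-in for a tf.experimental.ExtensionType
    # with two declared fields.
    values: list
    mask: list

mv = MaskedValue(values=[1, 2, 3], mask=[True, False, True])
print(asdict(mv))  # {'values': [1, 2, 3], 'mask': [True, False, True]}
```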
- `stream_executor`:
  - The top-level `stream_executor` directory has been deleted; users should use the equivalent headers and targets under `compiler/xla/stream_executor`.
- `tf.nn`:
  - Added `tf.nn.experimental.general_dropout`, which is similar to `tf.random.experimental.stateless_dropout` but accepts a custom sampler function.
- `tf.types.experimental.GenericFunction`:
  - The `experimental_get_compiler_ir` method now supports `tf.TensorSpec` compilation arguments.
- `tf.config.experimental.mlir_bridge_rollout`:
  - Removed the enums `MLIR_BRIDGE_ROLLOUT_SAFE_MODE_ENABLED` and `MLIR_BRIDGE_ROLLOUT_SAFE_MODE_FALLBACK_ENABLED`, which are no longer used by the tf2xla bridge.
Keras
Keras is a framework built on top of TensorFlow. See more details on the Keras website.
Breaking Changes
`tf.keras`:
- Moved all saving-related utilities to a new namespace, `keras.saving`, for example: `keras.saving.load_model`, `keras.saving.save_model`, `keras.saving.custom_object_scope`, `keras.saving.get_custom_objects`, `keras.saving.register_keras_serializable`, `keras.saving.get_registered_name` and `keras.saving.get_registered_object`. The previous API locations (in `keras.utils` and `keras.models`) will be available indefinitely, but we recommend you update your code to point to the new API locations.
- Improvements and fixes in Keras loss masking:
  - Whether you represent a ragged tensor as a `tf.RaggedTensor` or using Keras masking, the returned loss values should be identical to each other. In previous versions Keras may have silently ignored the mask.
  - If you use masked losses with Keras, the loss values may be different in TensorFlow 2.12 compared to previous versions.
  - In cases where the mask was previously ignored, you will now get an error if you pass a mask with an incompatible shape.
Major Features and Improvements
`tf.keras`:
- The new Keras model saving format (`.keras`) is available. You can start using it via `model.save(f"{fname}.keras", save_format="keras_v3")`. In the future it will become the default for all files with the `.keras` extension. This file format targets the Python runtime only and makes it possible to reload Python objects identical to the saved originals. The format supports non-numerical state such as vocabulary files and lookup tables, and it is easy to customize in the case of custom layers with exotic elements of state (e.g. a FIFOQueue). The format does not rely on bytecode or pickling, and is safe by default. Note that as a result, Python `lambdas` are disallowed at loading time. If you want to use `lambdas`, you can pass `safe_mode=False` to the loading method (only do this if you trust the source of the model).
- Added a `model.export(filepath)` API to create a lightweight SavedModel artifact that can be used for inference (e.g. with TF-Serving).
- Added the `keras.export.ExportArchive` class for low-level customization of the process of exporting SavedModel artifacts for inference. Both ways of exporting models are based on `tf.function` tracing and produce a TF program composed of TF ops. They are meant primarily for environments where the TF runtime is available, but not the Python interpreter, as is typical for production with TF Serving.
- Added utility `tf.keras.utils.FeatureSpace`, a one-stop shop for structured data preprocessing and encoding.
- Added `tf.SparseTensor` input support to the `tf.keras.layers.Embedding` layer. The layer now accepts a new boolean argument `sparse`. If `sparse` is set to True, the layer returns a SparseTensor instead of a dense Tensor. Defaults to False.
- Added `jit_compile` as a settable property to `tf.keras.Model`.
- Added a `synchronized` optional parameter to `layers.BatchNormalization`.
- Added a deprecation warning to `layers.experimental.SyncBatchNormalization`, suggesting the use of `layers.BatchNormalization` with `synchronized=True` instead.
- Updated `tf.keras.layers.BatchNormalization` to support masking of the inputs (`mask` argument) when computing the mean and variance.
- Added `tf.keras.layers.Identity`, a placeholder pass-through layer.
- Added a `show_trainable` option to `tf.keras.utils.model_to_dot` to display layer trainable status in model plots.
- Added the ability to save a `tf.keras.utils.FeatureSpace` object, via `feature_space.save("myfeaturespace.keras")`, and reload it via `feature_space = tf.keras.models.load_model("myfeaturespace.keras")`.
- Added utility `tf.keras.utils.to_ordinal` to convert a class vector to an ordinal regression / classification matrix.
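In the usual ordinal encoding, class k maps to a row whose first k entries are 1, i.e. column i answers "is the label greater than i?". A plain-Python sketch of that encoding convention (an illustration, not the Keras implementation):

```python
def to_ordinal(y, num_classes):
    # Class k -> [1]*k + [0]*(num_classes-1-k): column i is 1 iff label > i.
    return [[1 if label > i else 0 for i in range(num_classes - 1)]
            for label in y]

print(to_ordinal([0, 1, 3], num_classes=4))
# [[0, 0, 0], [1, 0, 0], [1, 1, 1]]
```

Unlike one-hot encoding, adjacent classes share prefix bits, which lets a model exploit the ordering of the classes.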
Bug Fixes and Other Changes
- N/A
Security
- Fixes an FPE in TFLite in conv kernel CVE-2023-27579
- Fixes a double free in Fractional(Max/Avg)Pool CVE-2023-25801
- Fixes a null dereference on ParallelConcat with XLA CVE-2023-25676
- Fixes a segfault in Bincount with XLA CVE-2023-25675
- Fixes an NPE in RandomShuffle with XLA enabled CVE-2023-25674
- Fixes an FPE in TensorListSplit with XLA CVE-2023-25673
- Fixes segmentation fault in tfg-translate CVE-2023-25671
- Fixes an NPE in QuantizedMatMulWithBiasAndDequantize CVE-2023-25670
- Fixes an FPE in AvgPoolGrad with XLA CVE-2023-25669
- Fixes a heap out-of-buffer read vulnerability in the QuantizeAndDequantize operation CVE-2023-25668
- Fixes a segfault when opening multiframe gif CVE-2023-25667
- Fixes an NPE in SparseSparseMaximum CVE-2023-25665
- Fixes an FPE in AudioSpectrogram CVE-2023-25666
- Fixes a heap-buffer-overflow in AvgPoolGrad CVE-2023-25664
- Fixes an NPE in TensorArrayConcatV2 CVE-2023-25663
- Fixes an integer overflow in EditDistance CVE-2023-25662
- Fixes a segfault in `tf.raw_ops.Print` CVE-2023-25660
- Fixes an OOB read in DynamicStitch CVE-2023-25659
- Fixes an OOB read in GRUBlockCellGrad CVE-2023-25658
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar, Aakar Dwivedi, Abinash Satapathy, Aditya Kane, ag.ramesh, Alexander Grund, Andrei Pikas, andreii, Andrew Goodbody, angerson, Anthony_256, Ashay Rane, Ashiq Imran, Awsaf, Balint Cristian, Banikumar Maiti (Intel Aipg), Ben Barsdell, bhack, cfRod, Chao Chen, chenchongsong, Chris Mc, Daniil Kutz, David Rubinstein, dianjiaogit, dixr, Dongfeng Yu, dongfengy, drah, Eric Kunze, Feiyue Chen, Frederic Bastien, Gauri1 Deshpande, guozhong.zhuang, hDn248, HYChou, ingkarat, James Hilliard, Jason Furmanek, Jaya, Jens Glaser, Jerry Ge, Jiao Dian'S Power Plant, Jie Fu, Jinzhe Zeng, Jukyy, Kaixi Hou, Kanvi Khanna, Karel Ha, karllessard, Koan-Sin Tan, Konstantin Beluchenko, Kulin Seth, Kun Lu, Kyle Gerard Felker, Leopold Cambier, Lianmin Zheng, linlifan, liuyuanqiang, Lukas Geiger, Luke Hutton, Mahmoud Abuzaina, Manas Mohanty, Mateo Fidabel, Maxiwell S. Garcia, Mayank Raunak, mdfaijul, meatybobby, Meenakshi Venkataraman, Michael Holman, Nathan John Sircombe, Nathan Luehr, nitins17, Om Thakkar, Patrice Vignola, Pavani Majety, per1234, Philipp Hack, pollfly, Prianka Liz Kariat, Rahul Batra, rahulbatra85, ratnam.parikh, Rickard Hallerbäck, Roger Iyengar, Rohit Santhanam, Roman Baranchuk, Sachin Muradi, sanadani, Saoirse Stewart, seanshpark, Shawn Wang, shuw, Srinivasan Narayanamoorthy, Stewart Miles, Sunita Nadampalli, SuryanarayanaY, Takahashi Shuuji, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tirumalesh, TJ, Tony Sung, Trevor Morris, unda, Vertexwahn, Vinila S, William Muir, Xavier Bonaventura, xiang.zhang, Xiao-Yong Jin, yleeeee, Yong Tang, Yuriy Chernyshov, Zhang, Xiangze, zhaozheng09