Release 2.14.0
TensorFlow
Breaking Changes
- Support for Python 3.8 has been removed starting with TF 2.14. The TensorFlow 2.13.1 patch release will still have Python 3.8 support.
- `tf.Tensor`
  - The class hierarchy for `tf.Tensor` has changed, and there are now explicit `EagerTensor` and `SymbolicTensor` classes for eager and `tf.function` respectively. Users who relied on the exact type of Tensor (e.g. `type(t) == tf.Tensor`) will need to update their code to use `isinstance(t, tf.Tensor)`. The `tf.is_symbolic_tensor` helper added in 2.13 may be used when it is necessary to determine if a value is specifically a symbolic tensor.
- `tf.compat.v1.Session`
  - `tf.compat.v1.Session.partial_run` and `tf.compat.v1.Session.partial_run_setup` will be deprecated in the next release.
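For context, a minimal sketch of the `partial_run` workflow slated for deprecation; it only exists in TF1 compatibility mode, and the values here are illustrative:

```python
import tensorflow as tf

# Hedged sketch of the soon-to-be-deprecated partial_run API (TF1 compat).
tf.compat.v1.disable_eager_execution()

a = tf.compat.v1.placeholder(tf.float32, shape=[])
b = tf.compat.v1.placeholder(tf.float32, shape=[])
c = tf.compat.v1.placeholder(tf.float32, shape=[])
r1 = tf.add(a, b)
r2 = tf.multiply(r1, c)

sess = tf.compat.v1.Session()
# Declare every fetch and feed of the partial run up front...
h = sess.partial_run_setup([r1, r2], [a, b, c])
# ...then feed and fetch incrementally across several calls.
res1 = sess.partial_run(h, r1, feed_dict={a: 1.0, b: 2.0})  # 3.0
res2 = sess.partial_run(h, r2, feed_dict={c: res1})         # 9.0
```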
Known Caveats
- `tf.lite`
  - When the converter flag `_experimental_use_buffer_offset` is enabled, additional metadata is automatically excluded from the generated model. The behaviour is the same as if `exclude_conversion_metadata` were set.
  - If the model is larger than 2GB, then the `exclude_conversion_metadata` flag is also required to be set.
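A hedged sketch of how these caveat flags might be set on a converter. The flag names follow the release notes, and the tiny Keras model is only there to keep the example self-contained; the 2GB requirement applies to large models:

```python
import tensorflow as tf

# Hedged sketch: setting the caveat flags from the notes on a TFLite converter.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter._experimental_use_buffer_offset = True  # extra metadata is then excluded
converter.exclude_conversion_metadata = True      # required when the model exceeds 2GB
tflite_bytes = converter.convert()
```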
Major Features and Improvements
- The `tensorflow` pip package has a new, optional installation method for Linux that installs necessary Nvidia CUDA libraries through pip. As long as the Nvidia driver is already installed on the system, you may now run `pip install tensorflow[and-cuda]` to install TensorFlow's Nvidia CUDA library dependencies in the Python environment. Aside from the Nvidia driver, no other pre-existing Nvidia CUDA packages are necessary.
- Enable JIT-compiled i64-indexed kernels on GPU for large tensors with more than 2**32 elements.
  - Unary GPU kernels: Abs, Atanh, Acos, Acosh, Asin, Asinh, Atan, Cos, Cosh, Sin, Sinh, Tan, Tanh.
  - Binary GPU kernels: AddV2, Sub, Div, DivNoNan, Mul, MulNoNan, FloorDiv, Equal, NotEqual, Greater, GreaterEqual, LessEqual, Less.
- `tf.lite`
  - Added experimental support for conversion of models that may be larger than 2GB before buffer deduplication.
Bug Fixes and Other Changes
- `tf.py_function` and `tf.numpy_function` can now be used as function decorators for clearer code:

  ```python
  @tf.py_function(Tout=tf.float32)
  def my_fun(x):
      print("This always executes eagerly.")
      return x + 1
  ```
- `tf.lite`
  - `Strided_Slice` now supports `UINT32`.
- `tf.config.experimental.enable_tensor_float_32_execution`
  - Disabling TensorFloat-32 execution now causes TPUs to use float32 precision for float32 matmuls and other ops. TPUs have always used bfloat16 precision for certain ops, like matmul, when such ops had float32 inputs. Now, disabling TensorFloat-32 by calling `tf.config.experimental.enable_tensor_float_32_execution(False)` will cause TPUs to use float32 precision for such ops instead of bfloat16.
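The toggle itself can be exercised on any backend; a minimal sketch (the TPU precision change described above applies when this runs on a TPU):

```python
import tensorflow as tf

# Minimal sketch of toggling TensorFloat-32 execution and reading it back.
tf.config.experimental.enable_tensor_float_32_execution(False)
assert not tf.config.experimental.tensor_float_32_execution_enabled()

# Restore the default.
tf.config.experimental.enable_tensor_float_32_execution(True)
```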
- `tf.experimental.dtensor`
  - API changes for Relayout. Added a new API, `dtensor.relayout_like`, for relayouting a tensor according to the layout of another tensor.
  - Added `dtensor.get_default_mesh`, for retrieving the current default mesh under the dtensor context.
  - `*fft*` ops now support DTensors with any layout. Fixed a bug in `fft2d/fft3d`, `ifft2d/ifft3d`, `rfft2d/rfft3d`, and `irfft2d/irfft3d` for sharded input. Refer to this blog post for details.
- `tf.experimental.strict_mode`
  - Added a new API, `strict_mode`, which converts all deprecation warnings into runtime errors with instructions on switching to a recommended substitute.
- TensorFlow Debugger (tfdbg) CLI: ncurses-based CLI for tfdbg v1 was removed.
- TensorFlow now supports C++ RTTI on mobile and Android. To enable this feature, pass the flag `--define=tf_force_rtti=true` to Bazel when building TensorFlow. This may be needed when linking TensorFlow into RTTI-enabled programs, since mixing RTTI and non-RTTI code can cause ABI issues.
- `tf.ones`, `tf.zeros`, `tf.fill`, `tf.ones_like`, and `tf.zeros_like` now take an additional Layout argument that controls the output layout of their results.
- `tf.nest` and `tf.data` now support user-defined classes implementing `__tf_flatten__` and `__tf_unflatten__` methods. See the nest_util code examples for an example.
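A hedged sketch of the `__tf_flatten__`/`__tf_unflatten__` protocol mentioned above; `MaskedPair` is a hypothetical user-defined class, not a TensorFlow API:

```python
import tensorflow as tf

# Hedged sketch of the custom-class flatten/unflatten protocol. The class
# splits its state into static metadata and tensor-like components.
class MaskedPair:
    def __init__(self, mask, value):
        self.mask = mask
        self.value = value

    def __tf_flatten__(self):
        metadata = (self.mask,)      # static, non-tensor configuration
        components = (self.value,)   # tensor-like leaves seen by tf.nest
        return metadata, components

    @classmethod
    def __tf_unflatten__(cls, metadata, components):
        (mask,) = metadata
        (value,) = components
        return cls(mask, value)

pair = MaskedPair(mask=True, value=tf.constant([1.0, 2.0]))
leaves = tf.nest.flatten(pair)                    # the tensor components
rebuilt = tf.nest.pack_sequence_as(pair, leaves)  # reconstructed pair
```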
- TensorFlow IO support is now available for Apple Silicon packages.
- Refactor CpuExecutable to propagate LLVM errors.
Keras
Keras is a framework built on top of TensorFlow. See more details on the Keras website.
Major Features and Improvements
- `tf.keras`
  - `Model.compile` now supports `steps_per_execution='auto'` as a parameter, allowing automatic tuning of steps per execution during `Model.fit`, `Model.predict`, and `Model.evaluate` for a significant performance boost.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Aakar Dwivedi, Adrian Popescu, ag.ramesh, Akhil Goel, Albert Zeyer, Alex Rosen, Alexey Vishnyakov, Andrew Goodbody, angerson, Ashiq Imran, Ayan Moitra, Ben Barsdell, Bhavani Subramanian, Boian Petkantchin, BrianWieder, Chris Mc, cloudhan, Connor Flanagan, Daniel Lang, Daniel Yudelevich, Darya Parygina, David Korczynski, David Svantesson, dingyuqing05, Dragan Mladjenovic, dskkato, Eli Kobrin, Erick Ochoa, Erik Schultheis, Frédéric Bastien, gaikwadrahul8, Gauri1 Deshpande, guozhong.zhuang, H. Vetinari, Isaac Cilia Attard, Jake Hall, Jason Furmanek, Jerry Ge, Jinzhe Zeng, JJ, johnnkp, Jonathan Albrecht, jongkweh, justkw, Kanvi Khanna, kikoxia, Koan-Sin Tan, Kun-Lu, ltsai1, Lu Teng, luliyucoordinate, Mahmoud Abuzaina, mdfaijul, Milos Puzovic, Nathan Luehr, Om Thakkar, pateldeev, Peng Sun, Philipp Hack, pjpratik, Poliorcetics, rahulbatra85, rangjiaheng, Renato Arantes, Robert Kalmar, roho, Rylan Justice, Sachin Muradi, samypr100, Saoirse Stewart, Shanbin Ke, Shivam Mishra, shuw, Song Ziming, Stephan Hartmann, Sulav, sushreebarsa, T Coxon, Tai Ly, talyz, Thibaut Goetghebuer-Planchon, Thomas Preud'Homme, tilakrayal, Tirumalesh, Tj Xu, Tom Allsop, Trevor Morris, Varghese, Jojimon, Wen Chen, Yaohui Liu, Yimei Sun, Zhoulong Jiang, Zhoulong, Jiang