## Security Fixes & Hardening
This release introduces critical security hardening for model loading and saving, alongside improvements to the JAX backend metadata handling.
- Disallow `TFSMLayer` deserialization in `safe_mode` (#22035)
  - Previously, `TFSMLayer` could load external TensorFlow SavedModels during deserialization without respecting the Keras `safe_mode` setting. This could allow execution of attacker-controlled graphs during model invocation. `TFSMLayer` now enforces `safe_mode` by default: deserialization via `from_config()` raises a `ValueError` unless `safe_mode=False` is explicitly passed or `keras.config.enable_unsafe_deserialization()` is called.
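To illustrate the opt-out behavior described above, here is a minimal, self-contained sketch of a `safe_mode` gate in the same spirit; the function and flag names below are illustrative stand-ins, not the actual Keras internals.

```python
# Illustrative sketch of a safe_mode deserialization gate.
# All names here are assumptions for demonstration, not Keras APIs.
_UNSAFE_DESERIALIZATION = False


def enable_unsafe_deserialization():
    """Globally opt out of safe_mode (analogous to the Keras config call)."""
    global _UNSAFE_DESERIALIZATION
    _UNSAFE_DESERIALIZATION = True


def from_config(config, safe_mode=True):
    """Refuse to load an external artifact unless the caller opts out."""
    if safe_mode and not _UNSAFE_DESERIALIZATION:
        raise ValueError(
            "Loading this layer can execute arbitrary code and is "
            "disallowed in safe_mode. Pass safe_mode=False or call "
            "enable_unsafe_deserialization() to proceed."
        )
    # Placeholder for the actual SavedModel loading step.
    return {"loaded": config["filepath"]}
```

The key design point is that the safe path is the default: untrusted configs fail loudly, and loading attacker-supplied artifacts requires an explicit, per-call or global opt-out.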
- Fix Denial of Service (DoS) in `KerasFileEditor` (#21880)
  - Introduces validation for HDF5 dataset metadata to prevent "shape bomb" attacks.
  - Hardens the `.keras` file editor against malicious metadata that could cause dimension overflows or unbounded memory allocation (e.g. unbounded NumPy allocation of multi-gigabyte tensors).
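A "shape bomb" is metadata declaring a tensor shape whose element count would trigger a huge allocation before any real data is read. The check below is a minimal sketch of that kind of validation; the threshold and function name are our own assumptions, not the Keras implementation.

```python
# Illustrative shape-metadata validation against "shape bomb" attacks.
# MAX_ELEMENTS and validate_shape are assumptions for demonstration only.
import math

MAX_ELEMENTS = 10**9  # cap on the declared element count of any dataset


def validate_shape(shape):
    """Reject declared shapes that are invalid or absurdly large.

    Checking math.prod(shape) BEFORE allocating means a malicious header
    claiming e.g. (2**40, 2**40) is rejected without touching memory.
    """
    if any(dim < 0 for dim in shape):
        raise ValueError(f"Invalid negative dimension in shape {shape}")
    if math.prod(shape) > MAX_ELEMENTS:
        raise ValueError(
            f"Declared shape {shape} exceeds the allocation limit "
            f"of {MAX_ELEMENTS} elements"
        )
    return tuple(shape)
```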
- Block external links in HDF5 files (#22057)
  - Keras now explicitly disallows external links within HDF5 files during loading. This prevents potential security risks where a weight file could point to external system datasets.
  - Includes improved verification that H5 groups and datasets are local and valid.
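An HDF5 external link is an entry that resolves to a dataset in a *different* file on disk, which is why a weight file can otherwise reach outside itself. The scan below sketches how such links can be detected with h5py's public link API before any object is resolved; the helper name is our own, not a Keras function.

```python
# Sketch of an external-link scan using h5py's link inspection API.
# assert_no_external_links is an illustrative helper, not a Keras API.
import h5py


def assert_no_external_links(path):
    """Raise ValueError if any entry in the HDF5 file is an external link."""

    def scan(group, prefix=""):
        for name in group.keys():
            full = prefix + name
            # getlink=True returns the link object without resolving it,
            # so a link to a missing/hostile file cannot be triggered here.
            link = group.get(name, getlink=True)
            if isinstance(link, h5py.ExternalLink):
                raise ValueError(
                    f"Disallowed external link at '{full}' -> "
                    f"{link.filename}:{link.path}"
                )
            obj = group[name]
            if isinstance(obj, h5py.Group):
                scan(obj, full + "/")

    with h5py.File(path, "r") as f:
        scan(f)
```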
## Saving & Serialization
- Improved `H5IOStore` integrity (#22057)
  - Refactored `H5IOStore` and `ShardedH5IOStore` to remove unused, unverified methods.
  - Fixed key-ordering logic in sharded HDF5 stores to ensure consistent state loading across different environments.
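The key-ordering fix matters because filesystem listing order varies by platform, so a loader that trusts discovery order can map the same shards differently on different machines. The sketch below shows the general technique of normalizing to a sorted order; the layout and names are assumptions, not the `ShardedH5IOStore` implementation.

```python
# Illustrative deterministic key ordering across shards.
# merge_shard_keys and the shard layout are assumptions for demonstration.
def merge_shard_keys(shards):
    """Map each variable key to a shard, in a stable, sorted order.

    `shards` maps shard filename -> list of variable keys it contains.
    Sorting both shard names and keys makes the result independent of
    dict insertion order or on-disk directory listing order.
    """
    placement = {}
    for shard_name in sorted(shards):
        for key in shards[shard_name]:
            # The first shard (in sorted order) containing a key wins.
            placement.setdefault(key, shard_name)
    return [(key, placement[key]) for key in sorted(placement)]
```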
## Acknowledgments
Special thanks to the security researchers and contributors who reported these vulnerabilities and helped implement the fixes: @0xManan, @HyperPS, and @hertschuh.
Full Changelog: v3.12.0...v3.12.1