## Breaking Changes
- Promote `NativeMixedPrecision` to default `MixedPrecision` (and similar for `Learner.to_fp16`); old `MixedPrecision` is now called `NonNativeMixedPrecision` (#3127)
  - Use the new `GradientClip` callback instead of the `clip` parameter to use gradient clipping; see the first sketch after this list
- Adding a `Callback` which has the same name as an attribute no longer raises an exception (#3109)
- RNN training now requires `RNNCallback`, but does not require `RNNRegularizer`; `out` and `raw_out` have moved to `RNNRegularizer` (#3108)
  - Call `rnn_cbs` to get all callbacks needed for RNN training, optionally with regularization; see the second sketch after this list
- Replace callback `run_after` with `order`; do not run `after` cbs on exception (#3101); see the third sketch after this list
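
A minimal sketch of the mixed-precision and gradient-clipping changes above, assuming `GradientClip` accepts a `max_norm` argument; the MNIST data, resnet18 architecture, and training schedule are placeholders:

```python
from fastai.vision.all import *

# Placeholder data and model; substitute your own DataLoaders and architecture.
dls = ImageDataLoaders.from_folder(untar_data(URLs.MNIST_SAMPLE))
learn = cnn_learner(dls, resnet18, metrics=error_rate)

# to_fp16 now installs native (torch.cuda.amp) mixed precision by default.
learn = learn.to_fp16()

# Gradient clipping is now a callback rather than a `clip` parameter.
learn.fit_one_cycle(1, cbs=GradientClip(max_norm=1.0))
```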
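
For the RNN change: a sketch of assembling RNN training with `rnn_cbs`, assuming the signature `rnn_cbs(alpha=0., beta=0.)` where non-zero `alpha`/`beta` also add `RNNRegularizer`; the IMDB data and AWD_LSTM language model are placeholders:

```python
from fastai.text.all import *

# Placeholder language-model data and model.
dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), is_lm=True)
model = get_language_model(AWD_LSTM, len(dls.vocab))

# rnn_cbs returns the callbacks RNN training needs (e.g. RNNCallback);
# non-zero alpha/beta add RNNRegularizer for activation/temporal regularization.
learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(),
                cbs=rnn_cbs(alpha=2., beta=1.))
```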
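
For the `run_after` removal: callbacks now declare a numeric `order` attribute to position themselves relative to other callbacks. A hypothetical callback as a sketch:

```python
from fastai.callback.core import Callback
from fastai.callback.fp16 import MixedPrecision

class GradNormLogger(Callback):
    # Previously: `run_after = MixedPrecision`; ordering is now numeric.
    order = MixedPrecision.order + 1

    def before_step(self):
        # Runs after MixedPrecision in each batch's callback sequence.
        self.learn.grad_norm = sum(p.grad.norm() for p in self.model.parameters()
                                   if p.grad is not None)
```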
## New Features
- Add `GradientClip` callback (#3107)
- Make `Flatten` cast to `TensorBase` to simplify type compatibility (#3106)
- Make flattened metrics compatible with all tensor subclasses (#3105)
- New class method `TensorBase.register_func` to register types for `__torch_function__` (#3097); see the first sketch after this list
- New `dynamic` flag for controlling dynamic loss scaling in `NativeMixedPrecision` (#3096); see the second sketch after this list
- Remove need to call `to_native_fp32` before `predict`; set `skipped` in `NativeMixedPrecision` after NaN from dynamic loss scaling (#3095)
- Make native fp16 extensible with callbacks (#3094)
- Calculate correct `nf` in `create_head` based on `concat_pool` (#3115), thanks to @muellerzr; see the third sketch after this list
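
For `TensorBase.register_func`: a sketch assuming the signature `register_func(func, *arg_types)`, registering which operand types `__torch_function__` should treat as compatible for `func` so results keep the subclass; `TensorAudio` is a hypothetical subclass:

```python
import torch
from fastai.torch_core import TensorBase

class TensorAudio(TensorBase): pass  # hypothetical subclass for illustration

# Assumed semantics: mark torch.Tensor.mul as handled for these operand types,
# so the result of the operation keeps the TensorAudio type.
TensorAudio.register_func(torch.Tensor.mul, TensorAudio, TensorBase)

t = TensorAudio([1., 2., 3.])
print(type(t.mul(TensorBase([2., 2., 2.]))))  # expected: TensorAudio
```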
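
For the `dynamic` flag: a sketch assuming the flag is a constructor argument on the callback (named plain `MixedPrecision` after the promotion above); `dynamic=False` would fall back to a fixed loss scale:

```python
from fastai.vision.all import *

# Placeholder data and model.
dls = ImageDataLoaders.from_folder(untar_data(URLs.MNIST_SAMPLE))
learn = cnn_learner(dls, resnet18)

# Assumption: dynamic=False disables dynamic loss scaling, so the loss scale
# is not grown/shrunk in response to inf/NaN gradients during training.
learn.fit_one_cycle(1, cbs=MixedPrecision(dynamic=False))
```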
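
For the `create_head` fix: `nf` is now adjusted internally based on `concat_pool`, so callers pass the backbone's raw feature count either way; the resnet18 backbone and cut point below are placeholders:

```python
from fastai.vision.all import *

body = create_body(resnet18, pretrained=False, cut=-2)
nf = num_features_model(body)  # raw feature count (512 for resnet18)

# With concat_pool=True the head pools via AdaptiveConcatPool2d (avg + max),
# and create_head doubles nf internally instead of requiring nf*2 from the caller.
head = create_head(nf, n_out=10, concat_pool=True)
model = nn.Sequential(body, head)
```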