
Flux v0.11.0

Diff since v0.10.4

Closed issues:

  • Support for asymmetric padding (#258)
  • Support for Kaiming Initialization (#424; see the convolution sketch after this list)
  • trained recurrent model can't be saved in BSON (#531)
  • saving ADAM optimizer is broken [@save] [BSON] (#737)
  • BatchNorm gradients return Float64 instead of Float32 (#757)
  • ERROR: UndefVarError: derivative not defined (#768)
  • "Same" padding for conv layers? (#813)
  • Strange bug with Adjoint (#866)
  • Convolution without bias (#868)
  • REST API for real-time prediction (#911)
  • Zygote errors building bidirectional RNN (#962)
  • Batch aware binarycrossentropy and logitbinarycrossentropy (#1024)
  • Ways to freeze some part of a functor during training (#1034; see the parameter-freezing sketch after this list)
  • dropout function is implemented as just an identity (#1084)
  • Revisit the DataLoader API (#1088; see the DataLoader sketch after this list)
  • Dead link in documentation (#1097)
  • Orthogonal Initialization for RNN (#1107)
  • no method matching apply! (#1111)
  • Docs: typo in the DataLoader section (#1112)
  • InitError: could not load library "cudnn64_7.dll" (#1116)
  • How to download only one artifact of CUDA (#1117)
  • gpu function does not fully work on structs within structs (#1118)
  • SGD exported but not defined (#1121)
  • outdim not defined & don't know how to update Flux from 0.9 to 0.10 (#1154)
  • Simple regularisation fails for Flux 0.10.4 (#1157)
  • DataLoader type instability (#1159)
  • Remove Manifest from master (#1164)
  • LSTM cannot be trained successfully with the latest release version (#1168)
  • BatchNorm failed on GPU (#1172)
  • ExpDecay does not decay according to the description (#1176)
  • Repeating crashes of NVIDIA GPU/CUDA drivers while training on basic model zoo (#1183)
  • Can't use Flux (#1193)
  • Gradient Does not work on parameterized Variable (#1196)
  • Wrong MaxPool gradient? (#1197)
  • Apply boolean mask in loss function (#1198)
  • Passing the number of hidden units as a float has unexpected behaviour (#1199)
  • Error in displaying example for Flux.Dense (#1203)
  • Error running Flux on Jupyter (#1205)
  • MethodError: no method matching apply! in custom loss function (#1210)
  • Setting input or output layer size to a float in the Dense constructor should error (#1217)
  • MethodError: no method matching apply!(::Type{ADAM}, ::Array{Float64,2}, ::Array{Float64,2}) for simple example (#1219)
  • Incorrect gradients LSTM (#1222)
  • Create additional pooling layers (#1224)
  • ANN Forecasting with Flux (#1225)
  • Neural Networks for Image Segmentation (#1228)
  • Got an error while training on GPU with Mish activation function (#1235)
  • Gradient for BatchNorm no longer works (#1244)
  • How to constrain each element of the weights to be nonnegative? (#1250)
  • Retrieving weights (#1251)
  • Adding regularisation causes NaNs on first Epoch (#1254)
  • ERROR: Can't differentiate foreigncall expression (#1257)
  • Get wrong third order derivative of Morse potential (#1267)
  • ERROR: LoadError: Need an adjoint for constructor EnsembleSolution (#1270)
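Several of the items above track features that landed in this release (asymmetric padding, "same" padding, and Kaiming initialisation for conv layers). A minimal sketch of how they might be used, assuming `SamePad`, tuple-valued `pad`, and `Flux.kaiming_normal` are available in this version:

```julia
using Flux

# "Same" padding (#813): with stride 1 the output spatial size matches the input.
# Kaiming initialisation (#424) via the `init` keyword; `Flux.kaiming_normal`
# is assumed to be available in this release.
c_same = Conv((3, 3), 1 => 8, relu; pad = SamePad(), init = Flux.kaiming_normal)

# Asymmetric padding (#258): one value per side of each spatial dimension
# (here assumed to be left, right, top, bottom).
c_asym = Conv((3, 3), 1 => 8, relu; pad = (1, 0, 1, 0))

x = rand(Float32, 28, 28, 1, 4)   # a WHCN batch of four 28×28 single-channel images
size(c_same(x))                   # (28, 28, 8, 4)
size(c_asym(x))                   # width and height shrink by 1 with this padding
```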
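For the DataLoader API discussion (#1088), a minimal usage sketch, assuming the tuple-accepting `Flux.Data.DataLoader` constructor of this release:

```julia
using Flux
using Flux.Data: DataLoader

X = rand(Float32, 10, 100)    # 100 observations with 10 features each
Y = rand(Float32, 1, 100)     # matching targets

# Observations are batched along the last dimension; `shuffle` reorders them
# each epoch, and the tuple form yields `(x, y)` pairs per iteration.
loader = DataLoader((X, Y), batchsize = 16, shuffle = true)

for (x, y) in loader
    @assert size(x, 2) == size(y, 2) <= 16   # the last batch may be smaller
end
```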
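For freezing part of a model during training (#1034), one common approach is to drop the frozen arrays from the parameter collection passed to `train!`. A sketch, assuming the `Dense` layers of this release store their parameters in `W` and `b` fields:

```julia
using Flux

m = Chain(Dense(10, 5, relu), Dense(5, 2))

# Collect every trainable array, then drop the ones that should stay fixed;
# `train!` only updates what remains in `ps`.
ps = Flux.params(m)
delete!(ps, m[1].W)   # freeze the first Dense layer's weight matrix
delete!(ps, m[1].b)   # ...and its bias

loss(x, y) = Flux.mse(m(x), y)
data = [(rand(Float32, 10, 8), rand(Float32, 2, 8))]
Flux.train!(loss, ps, data, Descent(0.1))
```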

Merged pull requests:
