Flux v0.11.0
Closed issues:
- Support for asymmetric padding (#258)
- Support for Kaiming Initialization (#424)
- trained recurrent model can't be saved in BSON (#531)
- saving ADAM optimizer is broken [@save] [BSON] (#737)
- BatchNorm gradients return Float64 instead of Float32 (#757)
- ERROR: UndefVarError: derivative not defined (#768)
- "Same" padding for conv layers? (#813)
- Strange bug with Adjoint (#866)
- Convolution without bias (#868)
- REST API for real-time prediction (#911)
- Zygote errors building bidirectional RNN (#962)
- Batch aware binarycrossentropy and logitbinarycrossentropy (#1024)
- Ways to freeze some part of a functor during training (#1034) (see the sketch after this list)
- dropout function is implemented as just an identity (#1084)
- revisit DataLoader api (#1088)
- Dead link in documentation (#1097)
- Orthogonal Initialization for RNN (#1107)
- no method matching apply! (#1111)
- Docs: typo in the DataLoader section (#1112)
- InitError: could not load library "cudnn64_7.dll" (#1116)
- How to download only one artifact of CUDA (#1117)
- gpu function does not fully work on structs within structs (#1118)
- SGD exported but not defined (#1121)
- outdim not defined & don't know how to update Flux from 0.9 to 0.10 (#1154)
- Simple regularisation fails for Flux 0.10.4 (#1157)
- DataLoader type instability (#1159)
- Remove Manifest from master (#1164)
- LSTM cannot be trained successfully with the latest release version (#1168)
- BatchNorm failed on GPU (#1172)
- ExpDecay does not decay according to the description (#1176)
- Repeating crashes of NVIDIA GPU/CUDA drivers while training on basic model zoo (#1183)
- Can't use Flux (#1193)
- Gradient Does not work on parameterized Variable (#1196)
- Wrong MaxPool gradient? (#1197)
- Apply boolean mask in loss function (#1198)
- Passing the number of hidden units as a float has unexpected behaviour (#1199)
- Error in displaying example for Flux.Dense (#1203)
- Error running Flux on Jupyter (#1205)
- MethodError: no method matching apply! in custom loss function (#1210)
- Setting input or output layer size to a float in the Dense constructor should error (#1217)
- MethodError: no method matching apply!(::Type{ADAM}, ::Array{Float64,2}, ::Array{Float64,2}) for simple example (#1219)
- Incorrect gradients LSTM (#1222)
- Create additional pooling layers (#1224)
- ANN Forecasting with Flux (#1225)
- Neural Networks for Image Segmentation (#1228)
- Got an error while training on GPU with Mish activation function (#1235)
- Gradient for BatchNorm no longer works (#1244)
- how to constrain each element of the weights to be nonnegative? (#1250)
- Retrieving weights (#1251)
- Adding regularisation causes NaNs on first Epoch (#1254)
- ERROR: Can't differentiate foreigncall expression (#1257)
- Get wrong third order derivative of Morse potential (#1267)
- ERROR: LoadError: Need an adjoint for constructor EnsembleSolution (#1270)
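Several of the closed issues above are recurring usage questions rather than bugs: how to freeze part of a model during training (#1034) and how to add weight regularisation without hitting NaNs (#1157, #1254). Below is a minimal Julia sketch of the patterns usually suggested for these, assuming the Flux 0.11 field names `W` and `b` on `Dense`; the model, sizes, and the `sqnorm` helper are hypothetical illustrations, not code taken from those threads.

```julia
using Flux

# Toy model; the layer sizes here are arbitrary and only for illustration.
m = Chain(Dense(10, 5, relu), Dense(5, 2))

# Freeze the first layer by removing its arrays from the Params collection,
# so update!/train! never touches them. (Equivalently, collect params only
# from the layers you do want to train, e.g. Flux.params(m[2]).)
ps = Flux.params(m)
delete!(ps, m[1].W)
delete!(ps, m[1].b)

# Explicit L2 penalty over the remaining trainable parameters,
# keeping everything Float32 to avoid accidental promotion.
sqnorm(p) = sum(abs2, p)
loss(x, y) = Flux.Losses.mse(m(x), y) + 1f-4 * sum(sqnorm, ps)

x, y = rand(Float32, 10, 8), rand(Float32, 2, 8)
gs = gradient(() -> loss(x, y), ps)
Flux.Optimise.update!(ADAM(), ps, gs)
```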
Merged pull requests:
- Fix for onecold broadcast bug (#764) (@DhairyaLGandhi)
- Make bias optional (#873) (@DhairyaLGandhi)
- Add option for "Same" padding to conv and pooling layers (#901) (@DrChainsaw)
- Add some gradient checking tests on GPUs (#957) (@DhairyaLGandhi)
- docstring for pad, stride, dilation (#1093) (@saswatpp)
- Explicitly import `Flux.Optimiser.apply!` in optimiser docs (#1113) (@SebastianCallh)
- Fix doc indent (#1123) (@matsueushi)
- Removed deprecated SGD exports (#1127) (@bhvieira)
- Added dropgrad in huber_loss (#1129) (@HenriDeh)
- Update glorot_normal doc (#1131) (@AdarshKumar712)
- add ClipValue and ClipNorm (#1133) (@AStupidBear)
- Add functor Cholesky. (#1138) (@aterenin)
- Speedup matmul of CuMatrix and OneHotMatrix (#1141) (@AStupidBear)
- Cleaner training loop (#1149) (@DhairyaLGandhi)
- generalize and homogenize losses (#1150) (@CarloLucibello)
- extend dataloader (#1152) (@CarloLucibello)
- Add correct overload for apply! in docs (#1156) (@DhairyaLGandhi)
- Build docs on Julia 1.3 (#1160) (@DhairyaLGandhi)
- Update CompatHelper.yml (#1162) (@aminya)
- Fix docstring of logitcrossentropy (#1165) (@cossio)
- Fix crossentropy when some probabilities are zero (#1166) (@cossio)
- Update basics.md (#1167) (@mipals)
- Functors (#1174) (@MikeInnes)
- xlogy broadcast adjoint (#1175) (@MikeInnes)
- Align ExpDecay implementation with documentation (#1177) (@DrChainsaw)
- CompatHelper: add new compat entry for "Functors" at version "0.1" (#1179) (@github-actions[bot])
- Add some functions to docs (#1184) (@DhairyaLGandhi)
- Add some news (#1185) (@DhairyaLGandhi)
- LayerNorm regularization (#1187) (@sdobber)
- Correcting advanced.md (#1190) (@Sleort)
- Pull Request Template (#1191) (@MikeInnes)
- Improve `restructure` performance (#1192) (@MikeInnes)
- Fixing ambiguous remark in "Preserve inputs' types" (#1206) (@natema)
- Fixing typo in docs (#1207) (@natema)
- Fixing output format for `onehot` (#1208) (@natema)
- Fixing syntax in `onehot` docstring (#1211) (@natema)
- Fixing indentation in train! docstring (#1213) (@natema)
- Require weight and bias to be AbstractArrays (#1218) (@oxinabox)
- CompatHelper: bump compat for "Adapt" to "2.0" (#1220) (@github-actions[bot])
- DataLoader with NamedTuple (#1221) (@cossio)
- use `ntuple` in conv (#1231) (@MikeInnes)
- Fix jldoctest for Flux.Dense (#1236) (@lassepe)
- Fix inline code block (#1238) (@harryscholes)
- add adaptive pool (#1239) (@dnabanita7)
- Documentation: Move logging example outside gradient block (#1240) (@contradict)
- add kaiming initialization and relevant docstrings (#1243) (@johnnychen94)
- Optimistic ADAM (#1246) (@cossio)
- outdims: revise implementation for Chain, dimension check for Dense (#1252) (@hhaensel)
- move to CUDA.jl (#1258) (@CarloLucibello)
- improve regularisation docs (#1260) (@CarloLucibello)
- dropout function always active (#1263) (@CarloLucibello)
- create Losses module (#1264) (@CarloLucibello) (see the usage sketch after this list)
- fix a link typo in NEWS (#1265) (@johnnychen94)
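Taken together, several of the merged pull requests above add user-facing API: `SamePad` (#901), Kaiming initialisation (#1243), `ClipValue`/`ClipNorm` (#1133), the NamedTuple-aware `DataLoader` (#1221), and the new `Flux.Losses` module (#1264). The sketch below shows roughly how these pieces might be combined; the keyword names (`pad`, `init`, `batchsize`) and the `batch.images`/`batch.labels` field access follow a reading of those PRs and should be checked against the released docs, and the MNIST-shaped random data is purely a placeholder.

```julia
using Flux
using Flux.Data: DataLoader
using Flux.Losses: logitcrossentropy

# Convolutional model using "same" padding (#901) and Kaiming init (#1243).
model = Chain(
    Conv((3, 3), 1 => 16, relu; pad = SamePad(), init = Flux.kaiming_normal),
    MaxPool((2, 2)),
    Flux.flatten,
    Dense(16 * 14 * 14, 10),
)

# Dummy MNIST-shaped data wrapped in a NamedTuple-aware DataLoader (#1221).
X = rand(Float32, 28, 28, 1, 64)
Y = Flux.onehotbatch(rand(0:9, 64), 0:9)
loader = DataLoader((images = X, labels = Y), batchsize = 16, shuffle = true)

# Gradient clipping composed with ADAM via the new optimisers (#1133).
opt = Flux.Optimiser(ClipValue(1e-3), ADAM(1e-3))

ps = Flux.params(model)
for batch in loader
    gs = gradient(ps) do
        logitcrossentropy(model(batch.images), batch.labels)
    end
    Flux.Optimise.update!(opt, ps, gs)
end
```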