Flux v0.11.2
Closed issues:
- Error with Flux.crossentropy (#435)
- Unnecessary typeasserts in Flux.Optimise.apply! cause training to fail (#816)
- OneHotMatrix causes a 'scalar getindex disallowed' error on GPU (#1006)
- Higher order derivative products? (#1102)
- Gradient of Chain with respect to input on gpu (#1132)
- Backprop through time is truncated to only 1 time step (#1209)
- Failed to load Flux 0.11.0 and 0.11.1 with Julia 1.4.2 and 1.5.0 on a Windows machine (#1313)
- ADAMW Optimise has no field eta (#1316)
- LayerNorm only operates on 2D tensors (also Diagonal) (#1321)
- NNlib not defined error when loading model saved with BSON (#1322)
- Map and broadcast on LSTM layers give different gradients (#1324)
- Zygote (#1327)
- Error while precompiling Flux in Julia v1.4.2 on Windows 10 (#1328)
- DepthwiseConv gives incorrect channel sizes when initialized from array (#1331)
- Flux.params returns an extra parameter (#1348)
- XOR error not converging to 0 (#1352)
- Broken methods(Base.show) (#1354)
- Applying a Dense layer to a OneHotMatrix is very slow and can be optimized (#1356)
- Unable to obtain a gradient after a flattened pooling layer (#1359)
- "incremental compilation may be fatally broken for this module" when using Flux (#1370)
Merged pull requests:
- Add Flux.skip() (#1232) (@Moelf); a usage sketch follows this list
- Add ColPrac badge (#1317) (@oxinabox)
- Change ConvTranspose with SamePad to have outsize = stride * insize (#1320) (@DrChainsaw); see the shape check after this list
- Change NADAM citation (#1333) (@JeffFessler)
- Change params([W, b]) to params(W, b) (#1334) (@paulxshen); see the example after this list
- Export OADAM (#1336) (@cossio)
- Update for CUDA 2 (#1345) (@CarloLucibello)
- Fix BPTT by overriding stateful broadcast adjoint (#1358) (@DhairyaLGandhi)
- Implement AdaBelief (#1362) (@willtebbutt); see the sketch after this list
- Update functions.jl (#1366) (@okaerin)
- Fix broken methods(Base.show) (#1354) (#1368) (@racinmat)
- Remove trailing spaces (#1369) (@racinmat)
- Update Slack URL (#1373) (@logankilpatrick)
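
A minimal sketch of Flux.skip() from #1232, which skips the current data point in the training loop when a callback condition fires; the model, data, and threshold below are hypothetical, for illustration only.

```julia
using Flux

# Hypothetical model and data, just to show the callback hook.
m = Dense(10, 1)
data = [(rand(Float32, 10), rand(Float32, 1)) for _ in 1:100]
loss(x, y) = Flux.Losses.mse(m(x), y)

# Flux.skip() tells train! to skip the current data point and not
# apply the computed gradient when the condition is met.
cb = () -> loss(data[1]...) > 1f3 && Flux.skip()
Flux.train!(loss, Flux.params(m), data, Descent(0.1), cb = cb)
```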
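A quick shape check for the #1320 change, under assumed toy dimensions: with SamePad(), a ConvTranspose output's spatial size should equal stride times the input size.

```julia
using Flux

layer = ConvTranspose((3, 3), 2 => 4, stride = 2, pad = SamePad())
x = rand(Float32, 8, 8, 2, 1)   # width × height × channels × batch
size(layer(x))                  # expect (16, 16, 4, 1): outsize = stride * insize
```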
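The #1334 change passes parameters to params directly instead of wrapping them in an array; both forms collect the same trainables. A minimal sketch with hypothetical W and b:

```julia
using Flux

W = rand(2, 5)
b = rand(2)
predict(x) = W * x .+ b

# Previously written as params([W, b]); the varargs form is equivalent.
ps = Flux.params(W, b)
```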
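AdaBelief from #1362 plugs into the usual optimiser interface; the defaults noted in the comment are the customary ADAM-style values and are an assumption here, not taken from the PR.

```julia
using Flux

opt = AdaBelief()                  # ADAM-style defaults assumed: η = 0.001, β = (0.9, 0.999)
m = Dense(10, 1)                   # hypothetical model
ps = Flux.params(m)
gs = gradient(() -> sum(m(rand(Float32, 10))), ps)
Flux.Optimise.update!(opt, ps, gs)
```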