Flux v0.13.0
Changes in NEWS.md
Closed issues:
- DepthwiseConv does not run on GPU (#459)
- Flux type piracy breaks REPL completions (#629)
- Cannot do double iteration of DataLoader (#1227); see the example after this list
- elu activation fails on nested pullbacks on GPU (#1383)
- Training not working for 1D types (#1479)
- adjoint of conv adjoint. (#1665)
- `pullback`'s `back` returns unexpected size if some parameters are not used (#1601)
- Allow specification of RNG in Dropout (#1617)
- deprecate DepthwiseConv once we have groups in standard conv (#1667)
- `Parallel` edge-cases (#1685)
- Layer printing interferes with different element types (#1690)
- Normalization Layers not interacting well with destructure/restructure (#1727)
- missing docstring for `Flux.params` and `trainable` (#1732)
- inconsistency between params and destructure (#1733)
- Parameter Sharing breaks `destructure` (#1767)
- Remove Juno.jl dependency (#1779)
- `Flux.destructure`'s restructure fails in the gradient if loss does not use all parameters (#1826)
- `Flux.chunk` for multi-dimensional arrays (#1841)
- onehotbatch performance (#1844)
- Issue taking gradients of Chains on GPU (#1853)
- `Chain` forgets names under `fmap` (#1857)
- Recurrent 3d interface uses a lot of memory (#1872)
- Gradient incorrect for Conv-layer and complex numbers (#1876)
- Add Siamese Contrastive Loss function (#1880)
- Urgent GSoC revisions are needed. (#1890)
- Flux v0.12.9 and the Flux.Tracker.gradient is wrong, why? (#1898)
- LoadError: UndefVarError: flatten not defined (#1899)
- Proposal: Move `params` to Zygote (#1900)
- This one is not in use, which one should I use instead in Flux? (#1903)
- ERROR: LoadError: Can't differentiate foreigncall expression (#1904)
- Missing docstring for `Flux.Data.DataLoader` (#1909)
- Different `Julia` versions at different places for doctests (#1914)
- `Parallel` layer behaves differently in a `Chain` than on its own (#1919)
- ADAMW not stable (#1920)
- Chain ignores Base.show function of custom layer (#1929)
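
For issue #1227 above, repeated iteration of a `DataLoader` now works as expected. A minimal sketch (the array sizes and batch size below are invented for illustration, not taken from the release notes):

```julia
using Flux

# Hypothetical toy data: 10 observations with 2 features each, plus labels.
X, y = rand(Float32, 2, 10), rand(Float32, 10)
loader = Flux.DataLoader((X, y); batchsize = 5)

# Iterating the same loader across epochs works (the subject of #1227).
for epoch in 1:2
    for (xb, yb) in loader
        @assert size(xb) == (2, 5)
    end
end
```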
Merged pull requests:
- v0.13 deprecations (#1751) (@CarloLucibello)
- Print channel dimensions of `Dense` like those of `Conv` (#1658) (@mcabbott)
- Replace unrolled `foldl` used to evaluate `Chain` with a better one (#1809) (@mcabbott)
- Zero is a real number (`Flux.Nil`) (#1830) (@mcabbott)
- Use faster activation functions (#1837) (@mcabbott)
- Add RNG support for Dropout/AlphaDropout (#1849) (@darsnack)
- Fix CI to run on LTS + latest + nightly (#1852) (@darsnack)
- Fix type-stability for normalization layers (#1856) (@pxl-th)
- Use ProgressLogging instead of Juno (#1859) (@darsnack)
- Speed up `onehotbatch` (#1861) (@mcabbott)
- Simplify `trainable`, `functor` and `Parallel` (#1862) (@mcabbott)
- Replace `@adjoint` with `rrule` (#1863) (@mcabbott)
- Depend on Optimisers.jl (#1864) (@mcabbott)
- rationalize CI (#1865) (@CarloLucibello)
- Updated Dropout for more input types. (#1867) (@ShoofLLC)
- fix adamw (#1868) (@CarloLucibello)
- Add OperatorLearning.jl to Flux downstream tests (#1869) (@ChrisRackauckas)
- Mark dropout_mask as non-differentiable (#1870) (@ToucheSir)
- Recurrent benchmarks (#1871) (@mkschleg)
- Changed view to eachslice for folding in recurrent (#1873) (@mkschleg)
- use MLUtils (#1874) (@CarloLucibello)
- Add a structural `loadparams!` (#1875) (@darsnack)
- Truncated normal initialisation for weights (#1877) (@theabhirath)
- Extending `Diagonal` (#1881) (@theabhirath)
- rm Flux.Zeros (#1882) (@mcabbott)
- CompatHelper: add new compat entry for SpecialFunctions at version 2, (keep existing compat) (#1883) (@github-actions[bot])
- Make RNN layers accept `in => out` (#1886) (@mcabbott); see the sketch after this list
- Speeding up onehotbatch by creating OneHotArray directly (#1888) (@TLipede)
- CompatHelper: bump compat for MLUtils to 0.2, (keep existing compat) (#1889) (@github-actions[bot])
- Addition of Siamese Contrastive Loss function (Updated) (#1892) (@arcAman07)
- Buildkite: don't persist registry across runs (#1893) (@ToucheSir)
- Use `destructure` from Optimisers.jl (#1901) (@mcabbott)
- RFC: Restrict `train!` to `AbstractOptimiser` (#1902) (@mcabbott)
- Add `dims` keywords to some tests (#1906) (@mcabbott)
- Mark initialisations nograd, restrict signatures (#1908) (@mcabbott)
- Add `MLUtils`'s docs and fix some missing docstrings (#1910) (@Saransh-cpp)
- Improvements for LayerNorm (#1911) (@theabhirath)
- Improve docs for initialisation (#1912) (@mcabbott)
- Turn off doctests while building docs (#1915) (@Saransh-cpp)
- dampening -> damping (#1918) (@alhirzel)
- remove DepthwiseConv type in favor of Conv (#1921) (@CarloLucibello)
- Allow activation function for Diagonal (#1925) (@theabhirath)
- Upgrade warnings for v0.13 (#1926) (@mcabbott)
- Rename `Diagonal` to `Scale` (#1927) (@mcabbott)
- Fix a code block (#1933) (@prbzrg)
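
Several of the pull requests above change user-facing constructors. A minimal sketch of the new spellings (the layer sizes, activation, and RNG below are invented for illustration; CPU defaults assumed):

```julia
using Flux, Random

# RNN layers now accept `in => out`, matching Dense and Conv (#1886).
rnn = RNN(3 => 5)

# `Diagonal` is renamed to `Scale` (#1927), which also takes an activation (#1925).
scale = Scale(5, tanh)

# Dropout accepts an explicit RNG (#1617, #1849).
drop = Dropout(0.5; rng = MersenneTwister(0))
```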