Flux v0.10.2
Closed issues:
- Training pipeline inappropriate for large datasets (#278)
- Iterators for batches and epochs (#317)
- Implicit to Explicit Parameterization of Flux Models (#742)
- crossentropy is broken with CUDA due to log (#889)
- MethodError: no method matching CuArrays.CuArray{Float32,N} where N(::Float32) (#908)
- Limitation of Flux.istraining() (#909)
- Error with regularization using norm() and Zygote (#930)
- Zygote error on moving array to GPU (#947)
- update! not working (#951)
- Gradients of Chain including leakyrelu function (#963)
- Is there a way to have layers (esp. a Conv) without biases? (#966)
- Zygote error (#967)
- Why is running the MNIST example very slow? (#968)
- model-zoo Cifar10.jl is generating "Loss is NaN" (#970)
- Handling imbalanced data (#972)
- BatchNorm is broken (#976)
- Some activation functions change type when backpropagating and pooling layers don't like it (#979)
- Conv layers with CPU backend randomly mix up batch dimensions (#982)
- destructure/restructure is doing scalar indexing on GPU in the backward pass (#989)
- Flux pins down Colors (#995)
- Suggestion: Bounds for stochastic gradient descent loss fluctuations (#1000)
- How to keep weights of parts of a model fixed under Flux.train! (#1001) (see the sketch after this list)
- Support Colors.jl v0.10 and v0.11 (#1002)
- Typo in Flux home page description? Gougle? (#1004)
- Taking the package description (not too) seriously (#1007)
- le_float not differentiable: implementing reverse huber loss (#1011)
- Do you support or have any materials about optimizing with nonlinear constraints? (#1014)
- Loopinfo expression error with onecold (#1020)
- Type Promotion often Unwieldy and day Ruining (#1026)
- LoadError: MethodError: no method matching softmax(::Float32; dims=1) (#1029)
- Flux compat with Juno? (#1036)
- Failed to precompile Flux (#1045)
- train!() hasn't been exported in Flux.jl (#1048)
- error with Conv (#1055)
- The most basic Conv layer fails to compute gradients (#1060)
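Several of the issues above are usage questions rather than bugs. For #1001 (keeping part of a model fixed), the usual approach under Flux 0.10's implicit parameterization is to hand train! a restricted parameter collection, so the optimizer never sees the frozen layer. A minimal sketch; the model, data, and hyperparameters are invented for illustration:

```julia
using Flux

# Toy two-layer model; suppose we want to train only the second layer.
m = Chain(Dense(10, 5, relu), Dense(5, 2))

# train! updates exactly the Params it is given, so collecting
# parameters from m[2] alone leaves m[1]'s weights fixed.
ps = Flux.params(m[2])

loss(x, y) = Flux.mse(m(x), y)
data = [(rand(Float32, 10, 8), rand(Float32, 2, 8))]  # one dummy batch
Flux.train!(loss, ps, data, Descent(0.1))
```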
Merged pull requests:
- Added new loss functions. (#680) (@thebhatman)
- Added utility function outdims to compute output dimensions of a layer (#960) (@darsnack)
- Adding CompatHelper (#984) (@aminya)
- Add custom training loops to docs (#994) (@oxinabox)
- test restructure on the GPU (#998) (@ChrisRackauckas)
- Remove unused imports. (#1008) (@maleadt)
- Adapt to GPUArrays/CuArrays changes (#1013) (@maleadt)
- nograd for onecold, onehot, onehotbatch (#1021) (@CarloLucibello)
- Feature: Added Boston Housing Dataset (#1023) (@pranjaldatta)
- Install TagBot as a GitHub Action (#1030) (@JuliaTagBot)
- Remove outdated reference to truncate! (#1032) (@mcognetta)
- Remove get! macro (#1035) (@matsueushi)
- update compat to Juno 0.8 (#1037) (@heliosdrm)
- add NNlib docs + misc docs improvements (#1041) (@CarloLucibello)
- Add testmode! back for normalization layers (#1044) (@darsnack)
- Bump Colors compat to include 0.10, 0.11 (#1046) (@ianshmean)
- Edit description of convolutional layer (#1047) (@MotJuMi)
- add DataLoader (#1051) (@CarloLucibello) (see the sketch after this list)
- update docs and export update! (#1052) (@CarloLucibello)
- add Julia ecosystem doc section (#1057) (@CarloLucibello)
- fix a few typos in docstrings (#1061) (@visr)
- docstring ensure signature code formatting (#1062) (@visr)
- Include cuda/cuda.jl during precompilation? (#1064) (@ianshmean)
- fix travis for documentation build (#1066) (@johnnychen94)
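Two of the merged changes show up directly in user code: the new DataLoader (#1051) and the newly exported update! (#1052). A minimal sketch of how they combine, assuming the 0.10.2 API; the data and model here are invented for illustration:

```julia
using Flux
using Flux.Data: DataLoader

X = rand(Float32, 10, 100)  # 100 ten-dimensional inputs
Y = rand(Float32, 2, 100)   # matching two-dimensional targets

# DataLoader (#1051) iterates the observations in shuffled mini-batches.
loader = DataLoader(X, Y, batchsize=16, shuffle=true)

m = Chain(Dense(10, 5, relu), Dense(5, 2))
ps = Flux.params(m)
opt = Descent(0.1)

for (x, y) in loader
    gs = gradient(() -> Flux.mse(m(x), y), ps)
    Flux.update!(opt, ps, gs)  # update! is exported as of #1052
end
```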