FluxML/Flux.jl v0.16.7

Flux v0.16.7

Diff since v0.16.6

Merged pull requests:

Closed issues:

  • Docker images for Floydhub and similar (#148)
  • Implement einsum function/macro à la PyTorch and TF (#297)
  • Flux and Images (#326)
  • "Tracing" memory pre-allocator (#349)
  • make Juno dependency conditional (#454)
  • Encoding array dimensions in flux type system? (#614)
  • Gradient Interface Design (#628)
  • New New Optimisers (#637)
  • Clipping (#672)
  • CUDA Programming Model (#706)
  • LBFGS Optimizer (#719)
  • Flux plots (#729)
  • "ADAM" and friends should be called "Adam" (#795)
  • Add lookahead optimizer (#838)
  • ADAM does not accept keyword arguments (#871)
  • Compatibility with Tracker (#883)
  • Numerical issues for (logit)binarycrossentropy (#914); see the loss sketch after this list
  • Change abstract argument names to meaningful ASCII (#915)
  • Roadmap to Flux 1.0 (#961)
  • Zygote gives extra gradient entries for BatchNorm (#1018)
  • Helper methods for extracting RNN final state in a GPU compatible way (#1043)
  • helper function for selecting a gpu in multi-gpu setting (#1074)
  • Provide iper-simple examples directly in readme.md (#1115)
  • gpu function does nothing, but only on first run (#1119)
  • Behavior of chunk (#1120)
  • ArrayFire (#1126)
  • MethodError: no method matching zero(::Type{Array{Float32,2}}) In Flux Loss function (#1134)
  • Parameter collection and GPU movement fail on models defined via functions (#1201)
  • Derivative in loss function error (#1464)
  • Document OneHotArray (#1519); see the one-hot sketch after this list
  • Second order derivative (#1582)
  • Conv is not working for Complex when using CUDA (#1655)
  • Flux installation errors in julia 1.7.0-rc1, WSL2 (#1757)
  • Two-arg update!(x, d) is never used (#1860)
  • cpu() type stability (#1878)
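
A note on #795 above: the renaming it requested has long since landed, so current Flux spells the optimiser `Adam` rather than `ADAM`. A minimal sketch of one training step with the renamed optimiser, assuming the `Flux.setup`/`Flux.update!` API; the model and data are toy values, not from this release:

```julia
using Flux

# Sketch only: `Adam` is the current spelling (#795 asked to rename `ADAM`).
model = Dense(10 => 1)                    # toy linear layer
opt_state = Flux.setup(Adam(1e-3), model) # optimiser state for the model

x = rand(Float32, 10, 16)                 # 16 toy samples, 10 features each
y = rand(Float32, 1, 16)

loss, grads = Flux.withgradient(m -> Flux.mse(m(x), y), model)
Flux.update!(opt_state, model, grads[1])  # one gradient step
```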
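
For #914: the numerical concern is that computing binary cross-entropy from already-saturated probabilities loses precision, which is why the fused logit variant exists. A minimal sketch, assuming the exported `binarycrossentropy` and `logitbinarycrossentropy` losses; the logits and targets are toy values:

```julia
using Flux

# Sketch of the pitfall behind #914. Toy values only.
x = Float32[-20, 0, 20]   # raw logits (pre-sigmoid model outputs)
y = Float32[1, 1, 0]      # binary targets

Flux.binarycrossentropy(Flux.sigmoid.(x), y)  # via probabilities: imprecise once sigmoid saturates
Flux.logitbinarycrossentropy(x, y)            # fused form works on raw logits and stays accurate
```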
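
For #1519: the one-hot types now live in OneHotArrays.jl and are re-exported by Flux. A minimal sketch of `onehot`, `onehotbatch`, and `onecold`; the label set is a toy example:

```julia
using Flux

# Sketch of the one-hot utilities that #1519 asked to document. Toy labels.
labels = [:cat, :dog, :mouse]

v = Flux.onehot(:dog, labels)                       # OneHotVector, true at index 2
m = Flux.onehotbatch([:cat, :mouse, :dog], labels)  # 3×3 OneHotMatrix, one column per input

Flux.onecold(v, labels)   # :dog  (inverse of onehot)
Flux.onecold(m, labels)   # [:cat, :mouse, :dog]
```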
