fastai v2.7.0


Breaking changes

  • Distributed training now uses Hugging Face Accelerate rather than fastai's own launcher.
    Distributed training is now supported in a notebook -- see the distributed training tutorial for details
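Conceptually, a notebook launcher spawns one worker process per device directly from the running kernel, rather than requiring a command-line launch script. Below is a stdlib-only sketch of that idea, not Accelerate's or fastai's actual API; `train_fn`, `launch`, and `world_size` are illustrative names, and it uses the `fork` start method for simplicity where a real launcher does considerably more setup (process groups, device placement).

```python
import multiprocessing as mp

def train_fn(rank, world_size, queue):
    # Each process plays the role of one device/worker.
    # A real launcher would also initialize the distributed process group here.
    queue.put((rank, f"worker {rank}/{world_size} ran"))

def launch(fn, world_size=2):
    """Spawn one process per worker -- the core idea behind notebook launching."""
    ctx = mp.get_context("fork")
    queue = ctx.Queue()
    procs = [ctx.Process(target=fn, args=(r, world_size, queue))
             for r in range(world_size)]
    for p in procs:
        p.start()
    # Drain the queue before joining so no child blocks on a full pipe.
    results = dict(queue.get() for _ in range(world_size))
    for p in procs:
        p.join()
    return results

results = launch(train_fn, world_size=2)
print(sorted(results))  # ranks that completed: [0, 1]
```

The payoff of this pattern is that the "launch" step becomes an ordinary function call, which is what makes distributed training usable from inside a notebook cell.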

New Features

  • resize_images creates folder structure at dest when recurse=True (#3692)
  • Integrate nested callable and getcallable (#3691), thanks to @muellerzr
  • workaround pytorch subclass performance bug (#3682)
  • Torch 1.12.0 compatibility (#3659), thanks to @josiahls
  • Integrate Accelerate into fastai (#3646), thanks to @muellerzr
  • New Callback event, before and after backward (#3644), thanks to @muellerzr
  • Let optimizer use built torch opt (#3642), thanks to @muellerzr
  • Support PyTorch Dataloaders with DistributedDL (#3637), thanks to @tmabraham
  • Add channels_last cb (#3634), thanks to @tcapelle
  • support all timm kwargs (#3631)
  • Send self.loss_func to device if it is an instance of nn.Module (#3395), thanks to @arampacha
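Among the items above, the new before/after backward callback events hook the training loop immediately around the `loss.backward()` call. The dispatch pattern can be sketched in plain Python as follows; this is a schematic illustration, not fastai's actual `Callback` implementation, and `GradLogger` and `backward_step` are hypothetical names.

```python
class Callback:
    # No-op defaults, so subclasses override only the events they need.
    def before_backward(self): pass
    def after_backward(self): pass

class GradLogger(Callback):
    """Hypothetical callback that records when the backward hooks fire."""
    def __init__(self):
        self.events = []
    def before_backward(self):
        self.events.append("before_backward")
    def after_backward(self):
        self.events.append("after_backward")

def backward_step(callbacks, backward):
    # One slice of a training step: the new events wrap the backward pass.
    for cb in callbacks:
        cb.before_backward()
    backward()  # stands in for loss.backward()
    for cb in callbacks:
        cb.after_backward()

log = GradLogger()
backward_step([log], backward=lambda: None)
print(log.events)  # ['before_backward', 'after_backward']
```

Hooks at these two points are useful for things like gradient logging or modifying the loss just before backpropagation, without editing the training loop itself.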

Bugs Squashed

  • Solve hanging load_model and let LRFind be run in a distributed setup (#3689), thanks to @muellerzr
  • pytorch subclass functions fail if no positional args (#3687)
  • Workaround for performance bug in PyTorch with subclassed tensors (#3683), thanks to @warner-benjamin
  • Fix Tokenizer.get_lengths (#3667), thanks to @karotchykau
  • load_learner with cpu=False doesn't respect the current cuda device if model exported on another; fixes #3656 (#3657), thanks to @ohmeow
  • [Bugfix] Fix smoothloss on distributed (#3643), thanks to @muellerzr
  • WandbCallback Error: "Tensors must be CUDA and dense" on distributed training (#3291)
  • vision tutorial failed at learner.fine_tune(1) (#3283)
