## What's Changed
- Bump burn version 0.21 by @laggui in #4333
- Use NodeType to point to unimplemented node by @laggui in #4334
- burn-train: include GPU power draw in CudaMetric by @StanByriukov02 in #4322
- Fix book guide training changes by @laggui in #4340
- Combined PRs by @github-actions[bot] in #4352
- ensure that tensor is owned on iter_dim call by @tzemanovic in #4309
- docs: add DataframeDataset example using Polars by @SameerVers3 in #4298
- Add evaluation name `as_str` + display by @laggui in #4354
- Fix memory growth: use GraphLocator::remove_entry for orphan cleanup by @jnamika in #4342
- Bump ratatui from 0.29.0 to 0.30.0 by @dependabot[bot] in #4305
- Performance tweaks to the lp_norm code. by @crutcher in #4318
- Add `Scalar` runtime literal by @laggui in #4337
- Add compile errors for module derive by @laggui in #4356
- Make `ElementComparison` optional for dtypes by @skewballfox in #4255
- fix: Actually implement conv backwards ops for `burn-fusion`/`burn-router` by @wingertge in #4360
- Update for cubecl `try_cast_unchecked` -> `downcast` rename by @adolago in #4335
- fix: Fix interpolate with NHWC input by @wingertge in #4363
- Move ONNX import to `burn-onnx` crate by @laggui in #4361
- Update cubek by @laggui in #4365
- Implement Mean(L(P) Norm Error) Loss by @softmaximalist in #4341
- Fix clippy rust 1.93 by @laggui in #4371
- Use `cache_dir()` instead of hardcoded `~/.cache` path by @antimora in #4372
- Combined PRs by @github-actions[bot] in #4386
- Fix typo in dataset.md in Burn Book by @softmaximalist in #4380
- chore: Enable macos CI by @dcvz in #4389
- add AMSGrad support for Adam/AdamW by @donjuanplatinum in #4388 (update rule sketched after this list)
- Implement the PSNR vision metric by @softmaximalist in #4379 (formula after this list)
- Update cubecl wgpu v28 by @laggui in #4244
- Bump tracel-ai/github-actions from 6 to 7 by @dependabot[bot] in #4394
- Bump tracel-ai/github-actions/.github/workflows/publish-crate.yml from 6 to 7 by @dependabot[bot] in #4395
- chore: enable metal backend tests on ci by @dcvz in #4390
- Feat/device policy by @laggui in #4373
- More explicit global dtype support by @laggui in #4400
- Move ONNX crates to burn-onnx repository by @antimora in #4393
- opt(burn-cubecl): Optimized tensors by default by @wingertge in #4402
- chore: fix typos caught by xtask by @huahuadeliaoliao in #4406
- Add field docs to generated methods by @swfsql in #4408
- Make transformer layer APIs public for cross-crate usage by @antimora in #4409
- Implement SSIM vision metric by @softmaximalist in #4396 (formula after this list)
- Combined PRs by @github-actions[bot] in #4425
- move sort functions to orderable trait by @skewballfox in #4419
- [BREAKING] Add asymmetric padding support for conv and pool operations by @antimora in #4263
- Update Burn Book: metrics and trig functions by @softmaximalist in #4413
- Add device dtype usage by @laggui in #4404
- add KLDivLoss and batch_mean in reduction by @donjuanplatinum in #4399 (definition after this list)
- feat(burn-store): add ModuleAdapter chaining by @huahuadeliaoliao in #4407
- Fix cubek matmul stage size by @laggui in #4435
- Bump tracel-ai/github-actions/.github/workflows/publish-crate.yml from 7 to 8 by @dependabot[bot] in #4443
- chore: deprecate burn-candle backend by @antimora in #4416
- Add configurable activation and layer_norm_eps to transformer layers by @antimora in #4410
- Add Softsign activation function by @antimora in #4437 (this and the other new activations are defined after this list)
- chore: update workflows by @syl20bnr in #4446
- Add ThresholdedRelu activation function by @antimora in #4440
- Combined PRs by @github-actions[bot] in #4453
- Add check for wasm-bindgen installation by @zhoukekestar in #4358
- Add BiGru (bidirectional GRU) module by @antimora in #4442
- Fix: `SupervisedTraining` should use the model device by default by @laggui in #4456
- Add Elu activation function by @antimora in #4438
- chore: update workflows to use Tracel GitHub actions v9 by @syl20bnr in #4457
- Add CELU activation function by @antimora in #4441
- Add Selu activation function by @antimora in #4439
- Burn rl by @Charles23R in #4447
- Perf/fusion/reduce broadcasted by @nathanielsimard in #4338
- Implement median tensor operation by @softmaximalist in #4454
- Add deg2rad and rad2deg by @softmaximalist in #4462 (conversion formulas after this list)
- fix: use all dilation entries in `max_pool2d_with_indices_backward` by @fcasal in #4466
- Update zip + time by @laggui in #4468
- Implement basic RNN module by @aditya0by0 in #4460
- fix: default to single device strat when only 1 device by @Charles23R in #4463
- Combined PRs by @github-actions[bot] in #4485
- Add `module.train()` to move a module back to the autodiff backend by @laggui in #3975 (usage sketched after this list)
- chore: Update cubecl to runtime config refactor by @wingertge in #4489
- Feature flag + Tests for RL in burn-rl and burn-train by @Charles23R in #4470
- Fix reduce line size parallel and mean accumulator precision by @laggui in #4467
- Chore: Pre-Release 0.21.0-pre.1 by @nathanielsimard in #4494
- Fix pre-release by @nathanielsimard in #4495
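
For reference, the AMSGrad option added in #4388 swaps Adam's second-moment estimate for a running maximum, so the effective step size can never increase. The standard update rule (sketched from the AMSGrad paper, not copied from the PR):

```math
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\, g_t \\
v_t &= \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2 \\
\hat{v}_t &= \max(\hat{v}_{t-1},\, v_t) \\
\theta_t &= \theta_{t-1} - \frac{\alpha\, m_t}{\sqrt{\hat{v}_t} + \epsilon}
\end{aligned}
```

The only change from plain Adam is the $\max$ step; for AdamW, the decoupled weight decay applies to the parameters as usual.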
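
The PSNR (#4379) and SSIM (#4396) vision metrics follow the textbook definitions (restated here from the literature, not copied from the PRs). For images with dynamic range $L$:

```math
\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{L^2}{\mathrm{MSE}}\right),
\qquad
\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}
```

with the conventional constants $C_1 = (0.01\,L)^2$ and $C_2 = (0.03\,L)^2$.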
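
The KL-divergence loss from #4399 measures how a predicted distribution $Q$ diverges from a target $P$; batch_mean presumably divides the summed loss by the batch size rather than the element count, which is the usual convention (check the PR for the exact semantics):

```math
D_{\mathrm{KL}}(P \,\|\, Q) = \sum_i P(i) \log \frac{P(i)}{Q(i)}
```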
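
The new activation functions (#4437, #4440, #4438, #4441, #4439) all have standard textbook definitions:

```math
\begin{aligned}
\mathrm{Softsign}(x) &= \frac{x}{1 + |x|} \\
\mathrm{ThresholdedReLU}(x) &= \begin{cases} x & \text{if } x > \theta \\ 0 & \text{otherwise} \end{cases} \\
\mathrm{ELU}(x) &= \begin{cases} x & \text{if } x > 0 \\ \alpha\,(e^{x} - 1) & \text{otherwise} \end{cases} \\
\mathrm{CELU}(x) &= \max(0, x) + \min\!\left(0,\ \alpha\,(e^{x/\alpha} - 1)\right) \\
\mathrm{SELU}(x) &= \lambda \cdot \mathrm{ELU}(x) \ \text{with fixed}\ \lambda \approx 1.0507,\ \alpha \approx 1.6733
\end{aligned}
```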
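
The angle conversions from #4462 are the usual ones:

```math
\mathrm{deg2rad}(x) = \frac{\pi x}{180}, \qquad \mathrm{rad2deg}(x) = \frac{180\, x}{\pi}
```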
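
Finally, a minimal sketch of the `module.train()` round trip from #3975, assuming it mirrors the existing `valid()` conversion between the autodiff and inner backends. The `train()` call shape is an illustrative guess, not confirmed against the PR:

```rust
use burn::backend::{Autodiff, NdArray};
use burn::module::AutodiffModule;
use burn::nn::{Linear, LinearConfig};

fn main() {
    let device = Default::default();

    // A trivial module initialized on the autodiff-wrapped backend.
    let layer: Linear<Autodiff<NdArray>> = LinearConfig::new(4, 2).init(&device);

    // Pre-existing: `valid()` strips autodiff for cheap inference/validation.
    let eval_layer: Linear<NdArray> = layer.valid();

    // New in #3975: move the module back to the autodiff backend to resume
    // training. (Hypothetical call; see the PR for the real API.)
    let layer: Linear<Autodiff<NdArray>> = eval_layer.train();
    let _ = layer;
}
```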