What's Changed
- Fix: create multiple elemwise fused blocks by @nathanielsimard in #4497
- Upgrade to rand 0.10 by @laggui in #4500
- fix overflow in int_abs_elem for i64 min value by @Olexandr88 in #4486
- Implement LPIPS metrics for image quality by @koreaygj in #4403
- Fix quantization non-contiguous input by @laggui in #4498
- Add SequenceOutput struct for sequence prediction outputs by @softmaximalist in #4474
- feat: enhance attention() with scale, attn_bias, softcap, and is_causal by @antimora in #4476
- Fix too many kernels by @nathanielsimard in #4505
- feat: Enable 64-bit indexing for kernels by @wingertge in #4502
- feat: support padding on arbitrary dimensions by @antimora in #4507
- allow flash attention with causal by @louisfd in #4509
- Remove getrandom w/ wasm_js backend by @laggui in #4515
- Bump polars to 0.53.0 by @laggui in #4514
- perf: Make backing storage of `Shape` more flexible by @wingertge in #4516
- Combined PRs by @github-actions[bot] in #4528
- feat: add align_corners support to InterpolateOptions by @antimora in #4518
- fix: OptimSharded strategy validation device mismatch by @Dreaming-Codes in #4527
- Add native sign unary ops for CubeCL float and int by @yash27-lab in #4513
- Bump zip to 8.1.0 by @laggui in #4533
- Fix image-classification-web links by @laggui in #4536
- Fix zip yanked downstream dep by @laggui in #4540
- add LBFGS optimizer by @donjuanplatinum in #4471
- Replace Vec-based TransitionBuffer with tensor-backed storage by @arferreira in #4504
- Implement CTC loss by @softmaximalist in #4529
- refactor: Metadata type/strides refactor by @wingertge in #4534
- Attention: remove default impl and implement for all backends by @louisfd in #4544
- fix: resolve macOS build and test failures by @antimora in #4545
- fix: Bool from_data_dtype panics on GPU backends by @antimora in #4551
- Attention autotune by @louisfd in #4552
- Attention: add autotune gate by @louisfd in #4554
- Combined PRs by @github-actions[bot] in #4565
- Optional Ordering for NdArrayElement by @skewballfox in #4559
- Add Smooth L1 loss by @softmaximalist in #4547
- Implement HardShrink, SoftShrink and Shrink Activations by @aditya0by0 in #4556
- doc(notebook): add more basic operations and some examples by @Tyooughtul in #4542
- Update cubecl/cubek revs by @laggui in #4568
- Fix(lpips): load ImageNet backbone weights for pretrained models by @koreaygj in #4557
- [Feat] Global backend `Dispatch` by @laggui in #4508
- fix(burn-candle): move wildcard match arm to end of dtype match by @holg in #4571
- move sign back to mathOps by @skewballfox in #4573
- refactor: Move from `CubeOption` to `Option` by @wingertge in #4543
- update attention cubek autotune by @louisfd in #4579
- Add evaluator summary by @laggui in #4578
- Move `burn-nn` module name checks in `burn-store` adapter to the test section by @softmaximalist in #4580
- Expose `BurnpackError` by @AdrianEddy in #4585
- Combined PRs by @github-actions[bot] in #4588
- Bump versions by @nathanielsimard in #4589
- Add burn-dispatch publish by @laggui in #4590
Full Changelog: v0.21.0-pre.1...v0.21.0-pre.2