Intel® Extension for PyTorch* v2.1.20+xpu Release Notes



We are excited to announce the release of Intel® Extension for PyTorch* v2.1.20+xpu. This is a minor release based on PyTorch* 2.1.0 that supports Intel® GPU platforms (Intel® Data Center GPU Flex Series, Intel® Data Center GPU Max Series, and Intel® Arc™ A-Series Graphics).
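As a quick orientation, the sketch below shows the typical inference flow on these platforms: move the model and inputs to the `xpu` device and apply `ipex.optimize`. It assumes a working installation of this release with an XPU-capable GPU and the matching Intel® oneAPI runtime, and illustrates general usage rather than anything specific to this release.

```python
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device and ipex.optimize

# A small placeholder model; any eval-mode nn.Module follows the same pattern.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 8),
).eval()
data = torch.randn(4, 64)

# Move the model and inputs to the Intel GPU, then let the extension apply its optimizations.
model = model.to("xpu")
data = data.to("xpu")
model = ipex.optimize(model, dtype=torch.float16)

# Run inference under XPU autocast for float16 execution.
with torch.no_grad(), torch.xpu.amp.autocast(enabled=True, dtype=torch.float16):
    output = model(data)
```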

Highlights

  • Intel® oneAPI Base Toolkit 2024.1 compatibility
  • Intel® oneDNN v3.4 integration
  • LLM inference scaling optimization based on Intel® oneCCL 2021.12 (Prototype)
  • Bug fixes and other optimizations
    • Uplift XeTLA to v0.3.4.1 #3696
    • [SDP] Fallback unsupported bias size to native impl #3706
    • Error handling enhancement #3788, #3841
    • Fix beam search accuracy issue in workgroup reduce #3796
    • Support int32 index tensor in index operator #3808 (see the sketch after this list)
    • Add deepspeed in LLM dockerfile #3829
    • Fix batch norm accuracy issue #3882
    • Prebuilt wheel dockerfile update #3887, #3970
    • Fix Windows build failure with Intel® oneMKL 2024.1 in torch_patches #18
    • Fix FFT core dump issue with Intel® oneMKL 2024.1 in torch_patches #20, #21
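To make the int32 index change above concrete, the following minimal sketch (assuming an available `xpu` device; the shapes and values are illustrative only) drives the index operator with an int32 index tensor:

```python
import torch
import intel_extension_for_pytorch  # noqa: F401  # registers the "xpu" device

x = torch.randn(8, 4, device="xpu")

# An int32 index tensor; per the note above, the index operator on XPU
# now accepts it without an explicit cast to int64.
idx = torch.tensor([0, 2, 5], dtype=torch.int32, device="xpu")

rows = x[idx]      # advanced indexing (the index operator)
print(rows.shape)  # torch.Size([3, 4])
```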

Known Issues

Please refer to the Known Issues webpage.
