Intel® Extension for PyTorch* v1.13.10+xpu Release Notes


1.13.10+xpu

We are pleased to announce the release of Intel® Extension for PyTorch* 1.13.10+xpu, the first Intel® Extension for PyTorch* release that supports both CPU platforms and GPU platforms (Intel® Data Center GPU Flex Series and Intel® Data Center GPU Max Series) based on PyTorch* 1.13. It extends PyTorch* 1.13 with up-to-date features and optimizations for the xpu device, providing an extra performance boost on Intel hardware. These optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs, as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through the PyTorch* xpu device, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs.
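The xpu device mentioned above behaves like any other PyTorch* device string once the extension is imported. The following is a minimal sketch, not an official snippet from this release; it falls back to CPU when the extension or a supported GPU is unavailable so the same code runs everywhere:

```python
import torch

# Importing intel_extension_for_pytorch registers the "xpu" device with
# PyTorch's dispatcher. Fall back to CPU if the extension or GPU is absent.
try:
    import intel_extension_for_pytorch  # noqa: F401
    device = "xpu" if torch.xpu.is_available() else "cpu"
except ImportError:
    device = "cpu"

# Ordinary PyTorch code; only the device string changes.
model = torch.nn.Linear(16, 4).to(device)
x = torch.randn(8, 16, device=device)
with torch.no_grad():
    y = model(x)
```

On a machine with an Intel discrete GPU, `device` resolves to `"xpu"` and the linear layer executes on the GPU with no other code changes.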

Highlights

This release introduces XPU-specific optimizations for Intel discrete GPUs, including the Intel® Data Center GPU Flex Series and Intel® Data Center GPU Max Series. Optimized operators and kernels are implemented and registered through the PyTorch* dispatching mechanism for the xpu device, and are accelerated by the native vectorization and matrix-calculation features of Intel GPU hardware. In graph mode, additional operator fusions are supported to reduce operator/kernel invocation overhead and thus increase performance.

This release provides the following features:

  • Distributed Training on GPU:
    • support of distributed training with DistributedDataParallel (DDP) on Intel GPU hardware
    • support of distributed training with Horovod (experimental feature) on Intel GPU hardware
  • Automatic channels last format conversion on GPU:
    • Automatic channels last format conversion is enabled. Models using the torch.xpu.optimize API and running on Intel® Data Center GPU Max Series will be converted to the channels last memory format, while models running on Intel® Data Center GPU Flex Series will use the oneDNN block format.
  • CPU support is merged in this release:
    • CPU features and optimizations are equivalent to those released in the Intel® Extension for PyTorch* v1.13.0+cpu release, made publicly available in Nov 2022. Customers who would like to evaluate workloads on both GPU and CPU can use this package. For customers focusing on CPU only, we still recommend the Intel® Extension for PyTorch* v1.13.0+cpu release for its smaller footprint, fewer dependencies and broader OS support.
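The automatic channels last conversion above is driven by torch.xpu.optimize. The sketch below is illustrative rather than taken from this release: the xpu branch follows the documented flow, while the CPU fallback branch (an assumption for portability) simply applies channels_last manually so the example runs without the extension:

```python
import torch

# A small eval-mode model with a convolution, which benefits from
# channels-last / blocked memory formats.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3),
    torch.nn.ReLU(),
).eval()

device = "cpu"
try:
    import intel_extension_for_pytorch  # noqa: F401
    if torch.xpu.is_available():
        device = "xpu"
except ImportError:
    pass

model = model.to(device)
if device == "xpu":
    # On Max Series this converts the model to channels last automatically;
    # on Flex Series the oneDNN block format is chosen instead.
    model = torch.xpu.optimize(model, dtype=torch.float32)
else:
    # Fallback sketch only: emulate the memory-format change on CPU.
    model = model.to(memory_format=torch.channels_last)

x = torch.randn(1, 3, 32, 32, device=device).to(memory_format=torch.channels_last)
with torch.no_grad():
    out = model(x)
```

Note that no explicit per-backend format handling is needed in user code; torch.xpu.optimize selects the format based on the detected GPU.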

This release adds the following fusion patterns in PyTorch* JIT mode for Intel GPU:

  • Conv2D + UnaryOp(abs, sqrt, square, exp, log, round, GeLU, Log_Sigmoid, Hardswish, Mish, HardSigmoid, Tanh, Pow, ELU, hardtanh)
  • Linear + UnaryOp(abs, sqrt, square, exp, log, round, Log_Sigmoid, Hardswish, HardSigmoid, Pow, ELU, SiLU, hardtanh, Leaky_relu)
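These fusions operate on the JIT graph, so a model must be traced or scripted (and typically frozen) before the patterns can be folded. Below is a minimal sketch exercising one listed pattern, Conv2D + GeLU; the model and names are hypothetical, and it is run on CPU here for portability, whereas the fusion itself applies on the xpu device:

```python
import torch

# Hypothetical minimal model matching the Conv2D + GeLU fusion pattern.
class ConvGelu(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3)
        self.act = torch.nn.GELU()

    def forward(self, x):
        return self.act(self.conv(x))

model = ConvGelu().eval()
x = torch.randn(1, 3, 32, 32)

with torch.no_grad():
    # Trace to enter JIT mode, then freeze so fusion passes can rewrite
    # the graph; on an xpu-device model the Conv2D + GeLU pair would be
    # folded into a single fused kernel.
    traced = torch.jit.trace(model, x)
    traced = torch.jit.freeze(traced)
    y = traced(x)
```

The same structure applies to the Linear + UnaryOp patterns: build the model with the eligible operator pair, trace or script it, and let the extension's fusion pass rewrite the graph.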

Known Issues

Please refer to the Known Issues webpage.

Download wheel packages

The examples below use wget:

  • Compatible with Python 3.10:
wget https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/ipex_stable/xpu/intel_extension_for_pytorch-1.13.10%2Bxpu-cp310-cp310-linux_x86_64.whl
wget https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/torch_ccl/xpu/oneccl_bind_pt-1.13.100%2Bgpu-cp310-cp310-linux_x86_64.whl
wget https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/ipex_stable/xpu/torch-1.13.0a0%2Bgitb1dde16-cp310-cp310-linux_x86_64.whl
wget https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/ipex_stable/xpu/torchvision-0.14.1a0%2B0504df5-cp310-cp310-linux_x86_64.whl
  • Compatible with Python 3.9:
wget https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/ipex_stable/xpu/intel_extension_for_pytorch-1.13.10%2Bxpu-cp39-cp39-linux_x86_64.whl
wget https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/torch_ccl/xpu/oneccl_bind_pt-1.13.100%2Bgpu-cp39-cp39-linux_x86_64.whl
wget https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/ipex_stable/xpu/torch-1.13.0a0%2Bgitb1dde16-cp39-cp39-linux_x86_64.whl
wget https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/ipex_stable/xpu/torchvision-0.14.1a0%2B0504df5-cp39-cp39-linux_x86_64.whl
  • Compatible with Python 3.8:
wget https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/ipex_stable/xpu/intel_extension_for_pytorch-1.13.10%2Bxpu-cp38-cp38-linux_x86_64.whl
wget https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/torch_ccl/xpu/oneccl_bind_pt-1.13.100%2Bgpu-cp38-cp38-linux_x86_64.whl
wget https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/ipex_stable/xpu/torch-1.13.0a0%2Bgitb1dde16-cp38-cp38-linux_x86_64.whl
wget https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/ipex_stable/xpu/torchvision-0.14.1a0%2B0504df5-cp38-cp38-linux_x86_64.whl
  • Compatible with Python 3.7:
wget https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/ipex_stable/xpu/intel_extension_for_pytorch-1.13.10%2Bxpu-cp37-cp37m-linux_x86_64.whl
wget https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/torch_ccl/xpu/oneccl_bind_pt-1.13.100%2Bgpu-cp37-cp37m-linux_x86_64.whl
wget https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/ipex_stable/xpu/torch-1.13.0a0%2Bgitb1dde16-cp37-cp37m-linux_x86_64.whl
wget https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/ipex_stable/xpu/torchvision-0.14.1a0%2B0504df5-cp37-cp37m-linux_x86_64.whl
