dmlc/dgl v1.0.0

v1.0.0 release is a new milestone for DGL. 🎉🎉🎉

New Package: dgl.sparse

In this release, we introduce a brand-new package: dgl.sparse, which allows DGL users to build GNNs in the sparse-matrix paradigm. We provide Google Colab tutorials on the dgl.sparse package, from getting started with the sparse APIs to building different types of GNN models (including graph diffusion, hypergraph, and Graph Transformer models), along with 10+ examples of commonly used models in the GitHub code base.

NOTE: this feature is currently only available on Linux.
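
For a quick taste of the new package, here is a minimal sketch of a GCN layer written in the sparse-matrix paradigm. It follows the pattern shown in the dgl.sparse tutorials; the adjacency matrix A is assumed to be a dglsp.SparseMatrix built elsewhere (e.g. from COO indices), and exact signatures should be checked against the package documentation.

    import torch.nn as nn
    import dgl.sparse as dglsp

    class GCNLayer(nn.Module):
        def __init__(self, in_size, out_size):
            super().__init__()
            self.W = nn.Linear(in_size, out_size)

        def forward(self, A, X):
            # A: a dglsp.SparseMatrix adjacency; X: dense node features.
            I = dglsp.identity(A.shape)
            A_hat = A + I                         # add self-loops
            D_hat = dglsp.diag(A_hat.sum(dim=1))  # degree matrix of A_hat
            # Symmetric normalization: D^-1/2 (A + I) D^-1/2
            A_norm = D_hat ** -0.5 @ A_hat @ D_hat ** -0.5
            return A_norm @ self.W(X)             # SpMM drives message passing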

New Additions

  • A new example of SEAL+NGNN for OGBL datasets (#4550, #4772)
  • Add a DeepWalk module (#4562)
  • A new example of BiPointNet for the ModelNet40 dataset (#4434)
  • Add Transformer-related modules: Metapath2vec (#4660), LaplacianPosEnc (#4750), DegreeEncoder (#4742), ToLevi (#4884), BiasedMultiheadAttention (#4916), PathEncoder (#4956), GraphormerLayer (#4959), SpatialEncoder & SpatialEncoder3d (#4991)
  • Add graph positional encoding ops: double_radius_node_labeling (#4513), shortest_dist (#4799)
  • Add a new sampling algorithm: layer-neighbor (LABOR) sampling (#4668); see the usage sketch after this list
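
LABOR sampling combines neighbor and layer sampling, correlating the sampling decisions of vertices in the same layer so that fewer nodes are fetched for the same fanout. Below is a minimal usage sketch with the standard dataloading pipeline; the toy graph and seed nodes are placeholders, and the class name LaborSampler follows the dgl.dataloading documentation:

    import dgl
    import torch

    g = dgl.rand_graph(1000, 5000)   # toy graph for illustration
    train_nids = torch.arange(100)   # hypothetical seed nodes

    # Sample up to 5 neighbors per node for each of the 2 GNN layers.
    sampler = dgl.dataloading.LaborSampler([5, 5])
    dataloader = dgl.dataloading.DataLoader(
        g, train_nids, sampler, batch_size=32, shuffle=True)
    for input_nodes, output_nodes, blocks in dataloader:
        pass  # feed `blocks` to a GNN exactly as with NeighborSampler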

System Enhancement

  • Support PyTorch CUDA Stream (#4503)
  • Support canonical edge types in HeteroGraphConv (#4440); see the example after this list
  • Reduce Memory Consumption in Distributed Training Example (#4558)
  • Improve the performance of is_unibipartite (#4556)
  • Add options for padding and eigenvalues in Laplacian positional encoding transform (#4628)
  • Reduce startup overhead for distributed training (#4735)
  • Add Heterogeneous Graph support for GNNExplainer (#4401)
  • Enable sampling with edge masks on homogeneous graphs (#4748)
  • Enable save and load for Distributed Optimizer (#4752)
  • Add edge-wise message passing operators u_op_v (#4801)
  • Support bfloat16 (bf16) (#4648); see the sketch after this list
  • Accelerate CSRSliceMatrix<kDGLCUDA, IdType> by leveraging hashmap (#4924)
  • Decouple the size of node/edge data files from the nodes/edges_per_chunk entries in metadata.json for the Distributed Graph Partition Pipeline (#4930)
  • Always use canonical etypes during partition and loading in distributed DGL (#4777, #4814)
  • Add Parquet support for node/edge data in the Distributed Partition Pipeline (#4933)
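
With canonical edge types, the module dict passed to HeteroGraphConv can be keyed by full (source type, edge type, destination type) triples, which disambiguates edge types that share a name across different node-type pairs. A minimal sketch, with a made-up graph and feature sizes:

    import dgl
    import dgl.nn as dglnn
    import torch

    g = dgl.heterograph({
        ('user', 'follows', 'user'): (torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])),
        ('user', 'plays', 'game'): (torch.tensor([0, 2]), torch.tensor([0, 1])),
    })
    # Keys are canonical (src_type, edge_type, dst_type) triples.
    conv = dglnn.HeteroGraphConv({
        ('user', 'follows', 'user'): dglnn.GraphConv(8, 16),
        ('user', 'plays', 'game'): dglnn.SAGEConv(8, 16, 'mean'),
    }, aggregate='sum')
    feats = {'user': torch.randn(3, 8), 'game': torch.randn(2, 8)}
    out = conv(g, feats)  # dict of outputs keyed by destination node type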
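
bfloat16 halves the memory footprint of features and gradients relative to FP32. Here is a sketch of running a conv layer end to end in bf16; it assumes a CUDA build of DGL and a bf16-capable GPU, and the toy graph is a placeholder:

    import dgl
    import dgl.nn as dglnn
    import torch

    dev = torch.device('cuda')
    g = dgl.rand_graph(100, 400).to(dev)
    feat = torch.randn(100, 16, device=dev, dtype=torch.bfloat16)
    # allow_zero_in_degree avoids errors on randomly generated graphs.
    conv = dglnn.GraphConv(16, 8, allow_zero_in_degree=True).to(dev, torch.bfloat16)
    out = conv(g, feat)  # message passing and the linear layer run in bf16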

Deprecation & Cleanup

Dependency Update

Starting from this release, we drop support for CUDA 10.1 and 11.0. On Windows, we additionally drop support for CUDA 10.2.

Linux: CentOS 7+ / Ubuntu 18.04+

PyTorch ver. \ CUDA ver.    10.2    11.3    11.6    11.7
1.12                        ✅      ✅      ✅
1.13                                        ✅      ✅

Windows: Windows 10+ / Windows Server 2016+

PyTorch ver. \ CUDA ver.    11.3    11.6    11.7
1.12                        ✅      ✅
1.13                                ✅      ✅

Bugfixes

  • Fix a bug related to EdgeDataLoader (#4497)
  • Fix graph structure corruption with transform (#4753)
  • Fix a bug that prevented UVA from working on old GPUs (#4781)
  • Fix NN modules crashing with non-FP32 inputs (#4829)

Installation

The installation URL and conda repository have changed for CUDA packages. Please use the following:

# If you installed dgl-cuXX pip wheel or dgl-cudaXX.X conda package, please uninstall them first.
pip install dgl -f https://data.dgl.ai/wheels/repo.html   # for CPU
pip install dgl -f https://data.dgl.ai/wheels/cuXX/repo.html   # for CUDA, XX = 102, 113, 116 or 117
conda install dgl -c dglteam   # for CPU
conda install dgl -c dglteam/label/cuXX   # for CUDA, XX = 102, 113, 116 or 117
