dmlc/dgl v0.8.0post2


This is a bugfix release with the following quality-of-life updates and bug fixes:

Quality-of-life updates

  • Python 3.10 support.
  • PyTorch 1.11 support.
  • CUDA 11.5 support on Linux. Please install with
    pip install dgl-cu115 -f https://data.dgl.ai/wheels/repo.html  # if using pip
    conda install dgl-cuda11.5 -c dglteam  # if using conda
    
  • Compatibility with DLPack 0.6 in tensoradapter for PyTorch 1.11 (#3803)
  • Set stacklevel=2 for dgl_warning (#3816), so warnings are attributed to the caller's code rather than to DGL internals; see the sketch after this list.
  • Support custom datasets in DataLoader that are not necessarily tensors (#3810 @yinpeiqi)
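
The stacklevel change (#3816) concerns how Python's warnings machinery attributes a warning to a source line; dgl_warning is DGL's internal wrapper around that machinery. A minimal, DGL-independent sketch (the function name deprecated_api is made up for illustration):

    import warnings

    def deprecated_api():
        # stacklevel=2 attributes the warning to the caller of this
        # function instead of to this line inside the library.
        warnings.warn("deprecated_api() is deprecated", stacklevel=2)

    deprecated_api()  # the reported filename/line now point here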

Bug fixes

  • Pass ntype/etype into the partition book in node_split/edge_split (#3828)
  • Fix multi-GPU RGCN example (#3871 @yaox12)
  • Send RPC messages in blocking mode in case of congestion (#3867). Note that this fix may cause a speed regression in distributed DGL training; we are still investigating the root cause of the underlying issue in #3881.
  • Fix CopyToSharedMem assuming that all relation graphs are homogeneous (#3841)
  • Fix HAN example crashing with CUDA (#3841)
  • Fix a crash in UVA sampling when prefetching features are not specified (#3862); see the first sketch after this list.
  • Fix a documentation display issue for node_split/edge_split (#3858)
  • Fix a device mismatch error in the distributed GraphSAGE training example in the multi-node multi-GPU setting (#3870)
  • Use torch.distributed.algorithms.join.Join to handle uneven training sets across workers in distributed training (#3870); see the second sketch after this list.
  • DataLoader documentation fixes (#3886)
  • Remove a redundant reference to the networkx package in pagerank.py (#3888 @AzureLeon1)
  • Make the source build work on systems where the default Python is Python 2 (#3718)
  • Fix UVA sampling with partially specified node types (#3897)
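
Both UVA fixes (#3862, #3897) touch the UVA-based sampling pipeline introduced in DGL 0.8. Below is a hedged sketch of the usage they cover, assuming a CUDA build of DGL and an available GPU; the random graph, the feature name 'feat', and the sizes are illustrative, not taken from the fixes themselves:

    import torch
    import dgl
    from dgl.dataloading import DataLoader, NeighborSampler

    g = dgl.rand_graph(10000, 100000)
    g.ndata['feat'] = torch.randn(g.num_nodes(), 16)
    train_nids = torch.arange(1000, device='cuda')

    # prefetch_node_feats asks the loader to gather these features for
    # each sampled block; #3862 covers the case where it is omitted and
    # #3897 the case where only some node types are listed.
    sampler = NeighborSampler([10, 10], prefetch_node_feats=['feat'])
    loader = DataLoader(
        g, train_nids, sampler,
        device='cuda',    # move sampled blocks to the GPU
        use_uva=True,     # sample directly from pinned host memory
        batch_size=256, shuffle=True)

    for input_nodes, output_nodes, blocks in loader:
        x = blocks[0].srcdata['feat']  # prefetched feature tensor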

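The Join-based fix (#3870) relies on a stock PyTorch mechanism, torch.distributed.algorithms.join.Join (available since PyTorch 1.10). A minimal sketch with a synthetic model and data, assuming the script is launched with torchrun so the process-group environment variables are set:

    import torch
    import torch.distributed as dist
    from torch.distributed.algorithms.join import Join
    from torch.nn.parallel import DistributedDataParallel as DDP

    dist.init_process_group("gloo")  # "nccl" for multi-GPU setups
    rank = dist.get_rank()

    model = DDP(torch.nn.Linear(8, 1))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # Uneven training set: each rank sees a different number of batches.
    batches = [torch.randn(4, 8) for _ in range(5 + rank)]

    with Join([model]):  # ranks that finish early shadow the collectives
        for x in batches:
            optimizer.zero_grad()
            loss = model(x).sum()
            loss.backward()
            optimizer.step()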