dmlc/dgl 0.5.0

This is a major release that includes new documentation, distributed GNN training support, new features and models, bug fixes, and further system performance improvements. Note that this is a large update that may break some existing code; see the Migration Guide for details.

New Documentation

  • A new user guide that explains the core concepts of DGL including graphs, features, message passing, DGL datasets, full-graph training, stochastic training, and distributed training.
  • Reworked the API reference manual.

Distributed Training

DGL now supports training GNNs on large graphs distributed across multiple machines. The new components live under the dgl.distributed package; the user guide chapter and the API reference page describe their usage. New end-to-end examples for distributed training:

  • An example for training GraphSAGE with neighbor sampling on ogbn-products and ogbn-papers100M (100M nodes, 1B edges), including scripts for supervised training, unsupervised training, and offline inference. Training takes 12 seconds per epoch for ogbn-papers100M on a cluster of 4 m5n.24xlarge instances and achieves 64% accuracy.
  • An example for training R-GCN with neighbor sampling on ogbn-mag, including scripts for both inductive and transductive modeling. Training takes 841 seconds per epoch on a cluster of 4 m5n.24xlarge CPU machines and achieves 42.32% accuracy.

New Features

Core data structure

  • Merged DGLGraph and DGLHeteroGraph. DGLGraph now supports nodes and edges of different types.
  • All the APIs on the old DGLGraph are now compatible with heterogeneous graphs. They include:
    • Mutation operations such as adding or removing nodes and edges.
    • Graph transformation routines such as dgl.reverse() and dgl.to_bidirected().
    • Subgraph extraction routines.
    • dgl.save_graphs() and dgl.load_graphs()
    • Batching and readout operators.
  • DGL now supports creating graphs with 32-bit integer IDs to further conserve memory. Three new APIs, DGLGraph.idtype, DGLGraph.int(), and DGLGraph.long(), get or change the integer type used to store the graph structure.
  • DGL now allows performing graph structure queries on GPU, such as DGLGraph.in_degrees(), DGLGraph.edge_ids(), DGLGraph.subgraph(), etc. A new API, DGLGraph.to(), copies a graph to a different device. This introduces a breaking change: the graph and its feature tensors must always be on the same device; see the Migration Guide for more explanations. A short sketch of the ID-type and device APIs appears after this list.
  • Many graph transformation and subgraph extraction operations in DGL now automatically copy the corresponding node and edge features from the original graph. The copying happens on demand, meaning that the copy does not take place until you actually access the feature.
    • Before 0.5
    >>> g = dgl.graph(([0, 1, 2], [3, 4, 5]))
    >>> g.ndata['x'] = torch.arange(12).view(6, 2)
    >>> sg = g.subgraph([0, 1])    # sg does not have feature 'x'
    >>> 'x' in sg.ndata
    False
    • From 0.5
    >>> g = dgl.graph(([0, 1, 2], [3, 4, 5]))
    >>> g.ndata['x'] = torch.arange(12).view(6, 2)
    >>> sg = g.subgraph([0, 1])    # sg inherits feature 'x' from 'g'
    >>> 'x' in sg.ndata
    True
    >>> print(sg.ndata['x'])       # the actual copy happens here
    tensor([[0, 1],
            [2, 3]])
  • DGL’s message passing operations (e.g., DGLGraph.update_all and DGLGraph.apply_edges) now support higher-order gradients when the backend is PyTorch; a sketch appears after this list.
  • DGLGraph.subgraph() and DGLGraph.edge_subgraph() now accept boolean tensors or dictionaries of boolean tensors as input; see the boolean-mask example after this list.
  • Min and max aggregators now return 0 instead of a large number for zero-degree nodes to improve the training experience.
  • DGL kernels and readout functions are now deterministic.
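To make the new ID-type and device APIs concrete, here is a minimal sketch (assuming the PyTorch backend; the GPU step is guarded so the snippet also runs on CPU-only machines):

    import torch
    import dgl

    g = dgl.graph(([0, 1, 2], [1, 2, 3]))
    print(g.idtype)               # torch.int64, the default ID type
    g = g.int()                   # store the structure with int32 IDs instead
    print(g.idtype)               # torch.int32

    # Copy the graph to GPU; structure queries then run on the GPU.
    if torch.cuda.is_available():
        g = g.to(torch.device('cuda:0'))
        print(g.in_degrees())     # computed on the GPU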
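A minimal sketch of second-order gradients through DGLGraph.update_all with the PyTorch backend; the toy graph and squared loss are illustrative assumptions only:

    import torch
    import dgl
    import dgl.function as fn

    g = dgl.graph(([0, 1], [1, 2]))
    x = torch.randn(3, 2, requires_grad=True)
    g.ndata['x'] = x
    # Message passing: copy source features, sum them at destinations.
    g.update_all(fn.copy_u('x', 'm'), fn.sum('m', 'h'))
    y = (g.ndata['h'] ** 2).sum()

    # First-order gradient, keeping the graph for another backward pass.
    (grad,) = torch.autograd.grad(y, x, create_graph=True)
    # Second-order gradient through the message passing kernel.
    (grad2,) = torch.autograd.grad(grad.sum(), x)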
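And a small sketch of the boolean-mask input to DGLGraph.subgraph():

    import torch
    import dgl

    g = dgl.graph(([0, 1, 2], [3, 4, 5]))   # 6 nodes
    mask = torch.tensor([True, True, False, False, False, False])
    sg = g.subgraph(mask)                    # equivalent to g.subgraph([0, 1])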

GNN training utilities

  • New classes: dgl.dataloading.NodeDataLoader and dgl.dataloading.EdgeDataLoader for stochastic training of node classification, edge classification, and link prediction with neighborhood sampling on a large graph. Both mirror PyTorch's DataLoader interface, allowing easy customization of the neighborhood sampling strategy; a minimal usage sketch appears after this list.
  • DGL neural networks now support feeding in a single tensor together with a block as input.
    • Previously, to perform message passing on a block, you always needed to feed in a pair of features representing the input and output node features, like the following:
      # Assuming that h is a 2D tensor of input node features
      def forward(self, blocks, h):
          for layer, block in zip(self.layers, blocks):
              # In a block, output (destination) nodes come first among the
              # input nodes, so their features are a prefix of h.
              h_dst = h[:block.number_of_dst_nodes()]
              h = layer(block, (h, h_dst))
          return h
    • Now, you only need to feed in a single tensor if the input graph is a block.
      # Assuming that h is a 2D tensor of input node features
      def forward(self, blocks, h):
          for layer, block in zip(self.layers, blocks):
              h = layer(block, h)
          return h
  • Added a check for zero in-degree nodes to the following modules to prevent potential accuracy degradation. To resolve the resulting error, either add self-loops with dgl.add_self_loop() or pass allow_zero_in_degree=True to suppress the check; see the example after this list.
    • GraphConv, GATConv, EdgeConv, SGConv, GMMConv, AGNNConv, DotGatConv
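A minimal sketch of stochastic training with dgl.dataloading.NodeDataLoader; the random placeholder graph, fanouts, and batch size are illustrative assumptions, not part of the release:

    import torch
    import dgl

    # Placeholder graph standing in for a real dataset.
    src = torch.randint(0, 1000, (5000,))
    dst = torch.randint(0, 1000, (5000,))
    g = dgl.graph((src, dst), num_nodes=1000)
    train_nids = torch.arange(600)

    # Sample 10 neighbors per node for each of two GNN layers.
    sampler = dgl.dataloading.MultiLayerNeighborSampler([10, 10])
    dataloader = dgl.dataloading.NodeDataLoader(
        g, train_nids, sampler, batch_size=64, shuffle=True, drop_last=False)

    for input_nodes, output_nodes, blocks in dataloader:
        # blocks is a list of message-flow graphs, one per layer; feed them
        # to a model such as the forward function shown above.
        pass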
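And a sketch of the new zero-in-degree check with GraphConv (the feature sizes are arbitrary):

    import torch
    import dgl
    from dgl.nn import GraphConv

    g = dgl.graph(([0, 1], [1, 2]))   # node 0 has no incoming edges
    h = torch.randn(3, 4)
    conv = GraphConv(4, 4)

    # conv(g, h) would raise a DGLError because node 0 has zero in-degree.
    out = conv(dgl.add_self_loop(g), h)       # fix: add self-loops

    # Or suppress the check when zero in-degrees are intended:
    conv = GraphConv(4, 4, allow_zero_in_degree=True)
    out = conv(g, h)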

New APIs

  • dgl.add_reverse_edges() adds reverse edges to a heterogeneous graph. It works on all edge types whose source node type is the same as its destination node type, as shown in the example below.
  • DGLGraph.shared_memory() copies the graph to shared memory.
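A minimal example; the printed tuple is the expected output for this toy graph, with the reversed edges appended after the originals:

    import dgl

    g = dgl.graph(([0, 1], [1, 2]))
    bg = dgl.add_reverse_edges(g)
    print(bg.edges())
    # expected: (tensor([0, 1, 1, 2]), tensor([1, 2, 0, 1]))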

New Models

Requirement Update

  • For PyTorch users, DGL now requires torch >= 1.5.0.
  • For MXNet users, DGL now requires mxnet >= 1.6.
  • For TensorFlow users, DGL now requires tensorflow >= 2.3.
  • Deprecated support for Python 3.5 and added support for Python 3.8; DGL now supports Python 3.6-3.8.
  • Added support for CUDA 10.2.
  • For users who build DGL from source:
    • On Linux: requires libstdc++.so.6.0.19 or later (equivalently, Ubuntu 14.04 or later, or CentOS 7 or later).
    • On Windows: Windows 10, or Windows Server 2016 or later.
    • On macOS: 10.9 or later.

Compatibility Issues

Pickle files created by version 0.4.3post2 or earlier cannot be loaded by 0.5.0. For now, load the graph with 0.4.3post2, save its structure as tensors, and reconstruct the graph with DGL 0.5, as sketched below.
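A minimal migration sketch, assuming the old pickle holds a single homogeneous DGLGraph; the file names are hypothetical:

    # Step 1: run with DGL 0.4.3post2 to export the structure as tensors.
    import pickle
    import torch

    with open('graph.pkl', 'rb') as f:
        g_old = pickle.load(f)
    src, dst = g_old.all_edges()
    torch.save((src, dst), 'edges.pt')

    # Step 2: run with DGL 0.5 to rebuild the graph from the tensors.
    import dgl
    src, dst = torch.load('edges.pt')
    g_new = dgl.graph((src, dst))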
