0.6.1 is a minor release following 0.6.0 that includes bug fixes, performance optimizations, and minor feature updates.
OGB Large-Scale Challenge Baselines
This release provides DGL-based baselines for the OGB Large-Scale Challenge (https://ogb.stanford.edu/kddcup2021/), specifically the node classification (#2810) and graph classification (#2778) tasks.
For node classification in particular, we additionally provide the preprocessed author and institution features, as well as the homogenized graph, for download.
System Support
- Tensoradapter now supports PyTorch 1.8.1.
Model Updates
- Boost then Convolve (#2740, credits to @nd7141)
- Distributed GPU training of RGCN (#2709)
- Variational Graph Auto-Encoders (#2587, #2727, credits to @JuliaSun623)
- InfoGraph (#2644, credits to @hengruizhang98)
- DimeNet++ (#2706, credits to @xnuohz)
- GNNExplainer (#2717, credits to @KounianhuaDu)
- Contrastive Multi-View Representation Learning on Graphs (#2739, credits to @hengruizhang98)
- Temporal Graph Networks (#2636, credits to @Ericcsr and thanks to @WangXuhongCN for reviewing)
- Tensorflow EdgeConv module (#2741, credits to @kyawlin)
- CompGCN (#2768, credits to @KounianhuaDu)
- JKNet (#2795, credits to @xnuohz)
Feature Updates
- dgl.nn.CFConv now supports unidirectional bipartite graphs and hence heterogeneous graphs (#2674); see the usage sketch after this list
- A QM9 Dataset variant with edge features (#2704 and #2801, credits to @hengruizhang98 and @milesial)
- Display error messages instead of error codes for TCP sockets (#2763)
- Add the ability to specify the starting point of the farthest point sampler (#2755, credits to @lygztq); see the sketch after this list
- Remove the specification of the number of workers and servers from the distributed training code and move it to the launch script (#2775)
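
To illustrate the new CFConv support, here is a minimal sketch that runs the module on a unidirectional bipartite graph; following the usual dgl.nn convention, bipartite node features are assumed to be passed as a (source, destination) pair. The node/edge type names and feature sizes below are made up for illustration.

```python
import dgl
import torch
from dgl.nn import CFConv

# A unidirectional bipartite graph: 4 source-type nodes connected to
# 3 destination-type nodes (the node/edge type names are hypothetical).
g = dgl.heterograph({
    ('source', 'link', 'dest'): ([0, 1, 2, 3], [0, 1, 2, 0])
})

conv = CFConv(node_in_feats=8, edge_in_feats=4, hidden_feats=16, out_feats=8)

src_feats = torch.randn(g.num_src_nodes(), 8)
dst_feats = torch.randn(g.num_dst_nodes(), 8)
edge_feats = torch.randn(g.num_edges(), 4)

# Bipartite inputs go in as a (source, destination) feature pair;
# the output is one vector per destination node.
out = conv(g, (src_feats, dst_feats), edge_feats)
print(out.shape)  # torch.Size([3, 8])
```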
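Likewise, a sketch of the new starting-point option for the farthest point sampler, assuming the functional API dgl.geometry.farthest_point_sampler (located under dgl.geometry.pytorch in some versions) with the start_idx argument added by #2755; the point-cloud data here is random.

```python
import torch
from dgl.geometry import farthest_point_sampler  # dgl.geometry.pytorch in some versions

# A batch of 1 point cloud with 100 random 3-D points (made-up data).
pos = torch.randn(1, 100, 3)

# Sample 10 points, forcing the walk to start from point index 0
# via the start_idx argument.
idx = farthest_point_sampler(pos, npoints=10, start_idx=0)
print(idx.shape)  # torch.Size([1, 10])
```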
Performance Optimizations
- Optimize the order of message passing and feature transformation in GraphSAGE (#2747); see the sketch after this list
- Remove duplicate validation in dgl.graph creation (#2789)
- Replace std::unordered_set with linear search for uniform integer sampling (#2710, credits to @pawelpiotrowicz)
- Automatically set the number of OMP threads for distributed trainers (#2812)
- Prefer parallelized COO-to-CSC conversion over transposing CSR (#2793)
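
The GraphSAGE change rests on the fact that a linear projection commutes with mean aggregation, so the projection can run before message passing whenever it shrinks the features; the per-edge aggregation then moves smaller vectors. A toy illustration of the identity in plain PyTorch (the shapes here are arbitrary):

```python
import torch
import torch.nn as nn

# For a linear map W and a mean aggregator:
#   mean_i(W h_i) == W(mean_i h_i)
# so when in_feats > out_feats it is cheaper to project first and let the
# aggregation move out_feats-sized messages instead of in_feats-sized ones.
in_feats, out_feats, num_neighbors = 256, 16, 100
W = nn.Linear(in_feats, out_feats, bias=False)
h = torch.randn(num_neighbors, in_feats)

agg_then_proj = W(h.mean(dim=0))    # aggregate 256-dim vectors, project once
proj_then_agg = W(h).mean(dim=0)    # project each neighbor, aggregate 16-dim vectors

assert torch.allclose(agg_then_proj, proj_then_agg, atol=1e-5)
```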
Bug Fixes
- Prevent symbol collisions of CUB with other libraries and remove the Thrust dependency (#2758, credits to @nv-dlasalle)
- Temporarily disable CPU FP16 support due to incomplete code (#2783)
- Fix GraphSAGE producing NaNs on graphs with zero edges (#2786, credits to @drsealks)
- Improve the DiffPool example (#2730, credits to @lygztq)
- Fix the RGCN link prediction example sometimes running beyond the given number of epochs (#2757, credits to @turoger)
- Add pseudocode for dgl.nn.HeteroGraphConv to demonstrate how it works (#2729); see the sketch after this list
- Make the number of sampled negative edges match the number of positive edges (#2726, credits to @fang2hou)
- Fix dgl.nn.HeteroGraphConv not being picklable (#2761)
- Add a default value for dgl.dataloading.BlockSampler (#2771, credits to @hengruizhang98)
- Rename num_labels to num_classes in datasets (#2769, credits to @andyxukq)
- Remove unused and undefined function in SEAL example (#2791, credits to @ghk829)
- Fix HGT example where relation-specific value tensors are overwritten (#2796)
- Clean up the process pool correctly when the process exits in distributed training (#2781)
- Fix feature type of ENZYMES in TUDataset (#2800)
- Documentation fixes (#2708, #2721, #2750, #2754, #2744, #2784, #2816, #2817, #2819, credits to @Padarn, @maqy1995, @Michael1015198808, @HuangLED, @xiamr, etc.)
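
For reference, a condensed usage sketch of the behavior that the HeteroGraphConv pseudocode documents: one submodule runs per relation on that relation's bipartite slice of the graph, and the per-relation outputs are combined for each destination node type by an aggregation function (sum here). The graph schema and feature sizes below are made up for illustration.

```python
import dgl
import torch
import dgl.nn as dglnn

# A small heterogeneous graph with two relations (hypothetical schema).
g = dgl.heterograph({
    ('user', 'follows', 'user'): ([0, 1, 2], [1, 2, 0]),
    ('user', 'plays', 'game'): ([0, 2], [0, 1]),
})

# One submodule per relation; per-destination-type results are summed.
conv = dglnn.HeteroGraphConv({
    'follows': dglnn.GraphConv(5, 8),
    'plays': dglnn.SAGEConv(5, 8, aggregator_type='mean'),
}, aggregate='sum')

inputs = {'user': torch.randn(3, 5), 'game': torch.randn(2, 5)}
# Internally: for each (stype, etype, dtype), run mods[etype] on the
# bipartite slice g[stype, etype, dtype], then aggregate per dtype.
outputs = conv(g, inputs)
print({k: v.shape for k, v in outputs.items()})
# {'user': torch.Size([3, 8]), 'game': torch.Size([2, 8])}
```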