0.7.1 Release Notes
0.7.1 is a minor release that includes multiple fixes along with a few new models, features, and optimizations, listed below.
Note: 0.7.1 for Linux is currently unavailable on our Anaconda repository. We are working on this issue; for now, please use pip installation instead.
New Models
- GCN-based spam review detection (#3145, @kayzliu)
- CARE-GNN (#3187, @kayzliu)
- GeniePath (#3199, @kayzliu)
- EEG-GCNN (#3186, @JOHNW02)
- EvolveGCN (#3190, @maqy1995)
New Features
- Allow providing a username in `tools/launch.py` (#3202, @erickim555)
- Refactor `tools/launch.py` and allow customized Python binary names (#3205, @erickim555)
- Add support for distributed preprocessing of heterogeneous graphs (#3137, @ankit-garg)
- Correctly pass all DGL client/server environment variables for user-defined multi-command (#3245, @erickim555)
- The DGL configuration directory can now be set with the environment variable `DGLDEFAULTDIR` (#3277, @konstantino); see the sketch after this list.
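
As a hedged illustration (not taken from the release notes), the snippet below shows one way the new environment variable might be used. The directory path is hypothetical, and the assumption that the variable should be set before importing `dgl` is mine.

```python
import os

# Assumption: DGLDEFAULTDIR overrides the default configuration directory
# (normally ~/.dgl). Set it before importing dgl so the setting is picked up
# when the library loads its configuration. The path below is only an example.
os.environ["DGLDEFAULTDIR"] = "/opt/shared/dgl-config"

import dgl
print(dgl.__version__)
```
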
Optimizations
- Improve usage of pinned memory in sparse optimizer (#3207, @nv-dlasalle)
- Optimize counting of nonzero entries of `DistTensor` (#3203, @freeliuzc)
- Remove activation cache if not required (#3258)
- Support excluding edges in `EdgeDataLoader` when sampling on the GPU (#3226, @nv-dlasalle); see the sketch below.
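
A minimal sketch of the GPU edge-exclusion path, assuming a PyTorch backend and a CUDA build of DGL; the random graph, batch size, and `exclude='self'` choice are illustrative rather than taken from the PR.

```python
import torch
import dgl
import dgl.dataloading as dgldl

# Toy graph placed on the GPU so that neighbor sampling (and edge exclusion)
# runs on the device as well.
g = dgl.rand_graph(1000, 5000).to('cuda')
train_eids = torch.arange(g.num_edges(), device='cuda')

sampler = dgldl.MultiLayerFullNeighborSampler(2)
dataloader = dgldl.EdgeDataLoader(
    g, train_eids, sampler,
    device='cuda',      # keep sampled blocks on the GPU
    exclude='self',     # drop the seed edges themselves from the sampled subgraphs
    batch_size=1024, shuffle=True, drop_last=False, num_workers=0)

for input_nodes, pair_graph, blocks in dataloader:
    pass  # an edge-prediction training step would go here
```
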
Fixes
- Update numbers for HiLANDER model (#3175)
- New training and test scripts for HiLANDER (#3180)
- Fix potential starvation in the socket receiver (#3176, @JingchengYu94)
- Fix typo in the TensorFlow backend (#3182, @lululxvi)
- Add `WeightBasis` documentation (#3189)
- Make default ntypes/etypes consistent between `dgl.DGLGraph` and `dgl.graph` (#3198)
- Set the sharing strategy for the SEAL example (#3167, @KounianhuaDu)
- Remove `DGL_LOADALL` in doc builds (#3150, @lululxvi)
- Fix distributed training hang with multiple samplers (#3169)
- Fix `random_walk` documentation inconsistency (#3188)
- Fix `curand_init()` calls in rowwise sampling that produced insufficiently random results (#3196, @nv-dlasalle)
- Fix the `force_reload` parameter of `FraudDataset` (#3210, @Orion-wyc)
- Fix the `num_workers` check for using `ScalarDataBatcher` (#3219, @nv-dlasalle)
- Fix tensoradapter linking issues (#3225, #3246, @nv-dlasalle)
- Fix DiffPool loss not including the loss of the first diffpooling layer (#3233, @yinpeiqi)
- Fix CUDA 11.1 SPMM crashing with duplicate edges (#3265)
- Fix `DotGatConv` attention bug when computing `edge_softmax` (#3272, @Flawless1202)
- Fix incorrect reshape argument in `RelGraphConv` (#3256, @minchenGrab)
- Documentation typos and fixes (#3214, #3221, #3244, #3231, #3261, #3264, #3275, #3285, @amorehead, @blokhinnv, @kalinin-sanja)