
v1.1.0

@czkkkkkk czkkkkkk released this 05 May 08:50
· 1400 commits to master since this release

What's new

  • Sparse API improvement
  • Datasets for evaluating graph transformers and graph learning under heterophily
  • Modules and utilities, including Cugraph convolution modules and SubgraphX
  • Deprecation and renaming of several graph transformer APIs
  • Performance improvement
  • Extended BF16 data type to support 4th Generation Intel® Xeon® Scalable Processors (#5497)

Detailed breakdown

Sparse API improvement (@czkkkkkk )

SparseMatrix class

  • Merge DiagMatrix class into SparseMatrix class, where the diagonal matrix is stored as a sparse matrix and inherits all the operators from sparse matrix. (#5367)
  • Support converting a DGLGraph to a SparseMatrix. g.adj(self, etype=None, eweight_name=None) returns the sparse matrix representation of the DGL graph g on the edge type etype with edge weight eweight_name. (#5372)
  • Enable zero-overhead conversion between Pytorch sparse tensors and SparseMatrix via dgl.sparse.to_torch_sparse_coo/csr/csc and dgl.sparse.from_torch_sparse. (#5373)

SparseMatrix operators

  • Support element-wise multiplication on two sparse matrices with different sparsities, e.g., A * B. (#5368)
  • Support element-wise division on two sparse matrices with the same sparsity, e.g., A / B. (#5369)
  • Support broadcast operators on a sparse matrix and a 1-D tensor via dgl.sparse.broadcast_add/sub/mul/div. (#5370)
  • Support column-wise softmax. (#5371)

SparseMatrix examples

  • Example for Heterogeneous Graph Attention Networks (#5568, @mufeili )

Datasets

Modules and utilities

Deprecation (#5100, @rudongyu )

  • laplacian_pe is deprecated and replaced by lap_pe
  • LaplacianPE is deprecated and replaced by LapPE
  • LaplacianPosEnc is deprecated and replaced by LapPosEncoder
  • BiasedMultiheadAttention is deprecated and replaced by BiasedMHA

Performance improvement

Speed up the CPU to_block function in graph sampling. (#5305, @peizhou001 )

  • Add a concurrent hash map to speed up the id mapping process by leveraging multi-threading (#5241, #5304).
  • Accelerate the expensive to_block by using the new hash map, improving performance by ~2.5x on average, and more when the batch size is large.
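The id-mapping step being accelerated can be illustrated with a plain, single-threaded Python sketch (the function and sample ids below are made up; the actual implementation replaces the dict with a concurrent C++ hash map built by multiple threads):

```python
def compact_ids(global_ids):
    """Assign consecutive local ids (0, 1, 2, ...) in first-seen order."""
    mapping = {}
    for gid in global_ids:
        if gid not in mapping:
            mapping[gid] = len(mapping)
    return mapping

# Edges of a sampled subgraph, in global node ids.
src = [40, 11, 40, 7]
dst = [11, 7, 7, 40]

# to_block relabels them into a compact local id space.
id_map = compact_ids(src + dst)
local_src = [id_map[i] for i in src]   # [0, 1, 0, 2]
local_dst = [id_map[i] for i in dst]   # [1, 2, 2, 0]
```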

Breaking changes

  • Since the new .adj() function of DGLGraph produces a SparseMatrix, the original .adj(self, transpose=False, ctx=F.cpu(), scipy_fmt=None, etype=None) is renamed to .adj_external, which returns the sparse format from external packages such as SciPy and PyTorch. (#5372)