Commit: Format conv docs
nyLiao committed Jul 14, 2024
1 parent 1a74578 commit b8e33a9
Showing 23 changed files with 231 additions and 204 deletions.
README.md (5 changes: 3 additions & 2 deletions)
@@ -3,8 +3,9 @@
<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.6.1/css/all.css">
<div align="center">
<a href="https://gdmnl.github.io/Spectral-GNN-Benchmark/"><img src="https://github.com/gdmnl/Spectral-GNN-Benchmark/actions/workflows/docs.yaml/badge.svg" alt="Docs"></a>
<a href="https://github.com/gdmnl/Spectral-GNN-Benchmark?tab=MIT-1-ov-file"><img src="https://img.shields.io/github/license/gdmnl/Spectral-GNN-Benchmark" alt="License"></a>
<a href="https://github.com/gdmnl/Spectral-GNN-Benchmark/releases/latest"><img src="https://img.shields.io/github/v/release/gdmnl/Spectral-GNN-Benchmark?include_prereleases" alt="Contrib"></a>
<a href="https://arxiv.org/abs/2406.09675"><img src="https://img.shields.io/badge/arXiv-2406.09675-b31b1b.svg?logo=arxiv" alt="arXiv"></a>
<a href="https://github.com/gdmnl/Spectral-GNN-Benchmark?tab=MIT-1-ov-file"><img src="https://img.shields.io/github/license/gdmnl/Spectral-GNN-Benchmark?logo=data:image/svg%2bxml;base64,PHN2ZyB2aWV3Qm94PSIwIDAgMjQgMjQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgb25jbGljaz0ibGltcGlhcl9jYW1wb3MoKTtzaG93cHJlc3VwdWVzdG8oZmFsc2UsdHJ1ZSwzKTsiIHN0eWxlPSJjdXJzb3I6cG9pbnRlciI+PGcgZmlsbD0iI2Y1ZjVmNSI+CgkJCTxwYXRoIGQ9Im0yMy45IDkuNy0zLjU0LTcuODktLjAwNS0uMDFhLjU0Mi41NDIgMCAwIDAtLjA0MS0uMDc2bC0uMDE0LS4wMThhLjUzMy41MzMgMCAwIDAtLjEyMi0uMTIybC0uMDE1LS4wMTFhLjUyOC41MjggMCAwIDAtLjA4LS4wNDRsLS4wMjQtLjAwOWEuNTI3LjUyNyAwIDAgMC0uMDY3LS4wMmwtLjAyOC0uMDA3YS41MjQuNTI0IDAgMCAwLS4wOTYtLjAxaC02Ljg1Yy0xLjAyLTEuNTItMS4wMi0xLjU0LTIgMGgtNi44NmEuNTQzLjU0MyAwIDAgMC0uMDk2LjAxbC0uMDI4LjAwN2EuNTE2LjUxNiAwIDAgMC0uMDY3LjAybC0uMDI0LjAxYS41MzcuNTM3IDAgMCAwLS4wOC4wNDNsLS4wMTUuMDExYS41MS41MSAwIDAgMC0uMDU3LjA0N2wtLjAyLjAyYS41NDMuNTQzIDAgMCAwLS4wNDUuMDU1bC0uMDE0LjAxOGEuNTIyLjUyMiAwIDAgMC0uMDQxLjA3NWwtLjAwNS4wMXYuMDAxTC4xMTYgOS43MmEuNTMxLjUzMSAwIDAgMC0uMDk2LjMwNGMwIDIuMjggMS44NiA0LjE0IDQuMTQgNC4xNHM0LjE0LTEuODYgNC4xNC00LjE0YS41My41MyAwIDAgMC0uMDk2LS4zMDRsLTMuMjUtNi4zNyA2LjA3LS4wMjN2MTguMmMtMi41NS4yOTQtNy4wMS4zODEtNyAyLjVoMTZjMC0yLjAzLTQuNDgtMi4yNy03LTIuNXYtMTguMWw1LjY5LS4wMi0yLjkyIDYuNDljMCAuMDAyIDAgLjAwMy0uMDAyLjAwNWwtLjAwNi4wMThhLjU0NS41NDUgMCAwIDAtLjAyMy4wNzVsLS4wMDUuMDJhLjUyNC41MjQgMCAwIDAtLjAxLjA5MnYuMDA4YzAgMi4yOCAxLjg2IDQuMTQgNC4xNCA0LjE0IDIuMjggMCA0LjE0LTEuODYgNC4xNC00LjE0YS41MjguNTI4IDAgMCAwLS4xMi0uMzMyeiI+PC9wYXRoPgo8L2c+PC9zdmc+" alt="License"></a>
<a href="https://github.com/gdmnl/Spectral-GNN-Benchmark/releases/latest"><img src="https://img.shields.io/github/v/release/gdmnl/Spectral-GNN-Benchmark?include_prereleases&logo=data:image/svg%2bxml;base64,PHN2ZyB2aWV3Qm94PSIwIDAgMTYgMTYiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PGcgZmlsbD0iI2Y1ZjVmNSI+CgkJCTxwYXRoIGQ9Ik0xIDcuNzc1VjIuNzVDMSAxLjc4NCAxLjc4NCAxIDIuNzUgMWg1LjAyNWMuNDY0IDAgLjkxLjE4NCAxLjIzOC41MTNsNi4yNSA2LjI1YTEuNzUgMS43NSAwIDAgMSAwIDIuNDc0bC01LjAyNiA1LjAyNmExLjc1IDEuNzUgMCAwIDEtMi40NzQgMGwtNi4yNS02LjI1QTEuNzUyIDEuNzUyIDAgMCAxIDEgNy43NzVabTEuNSAwYzAgLjA2Ni4wMjYuMTMuMDczLjE3N2w2LjI1IDYuMjVhLjI1LjI1IDAgMCAwIC4zNTQgMGw1LjAyNS01LjAyNWEuMjUuMjUgMCAwIDAgMC0uMzU0bC02LjI1LTYuMjVhLjI1LjI1IDAgMCAwLS4xNzctLjA3M0gyLjc1YS4yNS4yNSAwIDAgMC0uMjUuMjVaTTYgNWExIDEgMCAxIDEgMCAyIDEgMSAwIDAgMSAwLTJaIj48L3BhdGg+CjwvZz48L3N2Zz4=" alt="Contrib"></a>
<a href="https://gdmnl.github.io/Spectral-GNN-Benchmark/_tutorial/installation.html"><img src="https://img.shields.io/python/required-version-toml?tomlFilePath=https%3A%2F%2Fraw.githubusercontent.com%2Fgdmnl%2FSpectral-GNN-Benchmark%2Fmain%2Fpyproject.toml&logo=python&label=Python" alt="Python"></a>
<a href="https://gdmnl.github.io/Spectral-GNN-Benchmark/_tutorial/installation.html"><img src="https://img.shields.io/badge/PyTorch->=2.0-FF6F00?logo=pytorch" alt="PyTorch"></a>
</div>
benchmark/dataset_process/linkx.py (4 changes: 2 additions & 2 deletions)
@@ -19,8 +19,8 @@

class LINKX(InMemoryDataset):
r"""
paper: Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods
ref: https://github.com/CUAI/Non-Homophily-Large-Scale/
:paper: Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods
:ref: https://github.com/CUAI/Non-Homophily-Large-Scale/
"""
_dataset_drive_url = {
'snap-patents.mat' : '1ldh23TSY1PwXia6dU0MYcpyEgX-w3Hia',
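
For context on the pattern this commit applies throughout: bare "paper:"/"ref:" lines in docstrings become reST field-list entries, which Sphinx renders as labeled fields rather than plain text. A minimal sketch of the resulting docstring form (the class body is elided, and the InMemoryDataset import is the usual torch_geometric one, assumed here):

    from torch_geometric.data import InMemoryDataset

    class LINKX(InMemoryDataset):
        r"""
        :paper: Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods
        :ref: https://github.com/CUAI/Non-Homophily-Large-Scale/
        """
        # Dataset logic is unchanged by this commit; only the docstring format differs.
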
benchmark/dataset_process/yandex.py (4 changes: 2 additions & 2 deletions)
@@ -10,8 +10,8 @@

class Yandex(InMemoryDataset):
r"""
paper: A critical look at the evaluation of GNNs under heterophily: are we really making progress?
ref: https://github.com/yandex-research/heterophilous-graphs
:paper: A critical look at the evaluation of GNNs under heterophily: are we really making progress?
:ref: https://github.com/yandex-research/heterophilous-graphs
"""
def __init__(
self,
docs/source/_tutorial/configure.rst (4 changes: 2 additions & 2 deletions)
@@ -6,9 +6,9 @@ Experiment Parameters

Refer to the help text by:

.. code-block:: bash
.. code-block:: console
python benchmark/run_single.py --help
$ python benchmark/run_single.py --help
--help show this help message and exit

docs/source/_tutorial/installation.rst (26 changes: 13 additions & 13 deletions)
@@ -3,10 +3,10 @@ Installation

This package can be easily installed by running `pip <https://pip.pypa.io/en/stable/>`__ at package root path:

.. code-block:: bash
.. code-block:: console
pip install -r requirements.txt
pip install -e .[benchmark]
$ pip install -r requirements.txt
$ pip install -e .[benchmark]
The installation script already covers the following core dependencies:

@@ -26,41 +26,41 @@ Only ``pyg_spectral`` Package

Install without any options:

.. code-block:: bash
.. code-block:: console
pip install -e .
$ pip install -e .
Benchmark Experiments
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install with ``[benchmark]`` option:

.. code-block:: bash
.. code-block:: console
pip install -e .[benchmark]
$ pip install -e .[benchmark]
Docs Development
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install with ``[docs]`` option:

.. code-block:: bash
.. code-block:: console
pip install -e .[docs]
$ pip install -e .[docs]
C++ Backend
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1. Ensure C++ 11 is installed.

.. code-block:: bash
.. code-block:: console
gcc --version
$ gcc --version
2. Install with ``[cpp]`` option and environment variable ``PSFLAG_CPP=1``:

.. code-block:: bash
.. code-block:: console
export PSFLAG_CPP=1; pip install -e .[cpp]
$ export PSFLAG_CPP=1; pip install -e .[cpp]
.. [1] Please refer to the `official guide <https://pytorch.org/get-started/locally/>`__ if a specific CUDA version is required for PyTorch.
docs/source/_tutorial/reproduce.rst (22 changes: 11 additions & 11 deletions)
@@ -8,38 +8,38 @@ Datasets will be automatically downloaded and processed by the code.

**Run full-batch models** (*Table 2, 8, 9*):

.. code-block:: bash
.. code-block:: console
cd benchmark
bash scripts/runfb.sh
$ cd benchmark
$ bash scripts/runfb.sh
**Run mini-batch models** (*Table 3, 10, 11*):

.. code-block:: bash
.. code-block:: console
bash scripts/runmb.sh
$ bash scripts/runmb.sh
Additional Experiments
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**Effect of graph normalization** (*Figure 3, 9*):

.. code-block:: bash
.. code-block:: console
bash scripts/eval_degree.sh
$ bash scripts/eval_degree.sh
Figures can be plotted by: `benchmark/notebook/fig_degng.ipynb <https://github.com/gdmnl/Spectral-GNN-Benchmark/blob/main/benchmark/notebook/fig_degng.ipynb>`_.

**Effect of propagation hops** (*Figure 7, 8*):

.. code-block:: bash
.. code-block:: console
bash scripts/eval_hop.sh
$ bash scripts/eval_hop.sh
Figures can be plotted by: `benchmark/notebook/fig_hop.ipynb <https://github.com/gdmnl/Spectral-GNN-Benchmark/blob/main/benchmark/notebook/fig_hop.ipynb>`_.

**Frequency response** (*Table 12*):

.. code-block:: bash
.. code-block:: console
bash scripts/exp_filter.sh
$ bash scripts/exp_filter.sh
docs/source/conf.py (2 changes: 1 addition & 1 deletion)
@@ -65,7 +65,7 @@
napoleon_preprocess_types = True
autodoc_type_aliases = {
"Tensor": ":external:class:`Tensor <torch.Tensor>`",
# "SparseTensor": ":class:`torch_sparse.SparseTensor`",
"SparseTensor": ":external:func:`SparseTensor <torch.sparse_csr_tensor>`",
"pyg": "torch_geometric",
}
napoleon_type_aliases = autodoc_type_aliases
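
For readers unfamiliar with this Sphinx setting: autodoc_type_aliases (mirrored into napoleon_type_aliases) maps type names used in docstrings to cross-reference targets, so the previously commented-out SparseTensor entry now resolves to an external link to torch.sparse_csr_tensor. A sketch of the relevant conf.py fragment after this commit, with an illustrative comment on the effect:

    # docs/source/conf.py (fragment; values mirror the diff above)
    napoleon_preprocess_types = True
    autodoc_type_aliases = {
        "Tensor": ":external:class:`Tensor <torch.Tensor>`",
        "SparseTensor": ":external:func:`SparseTensor <torch.sparse_csr_tensor>`",
        "pyg": "torch_geometric",
    }
    napoleon_type_aliases = autodoc_type_aliases

    # Effect (illustrative): a docstring line such as
    #     adj (SparseTensor): normalized adjacency matrix.
    # now renders "SparseTensor" as a link to torch.sparse_csr_tensor
    # instead of leaving it as unresolved plain text.
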
pyg_spectral/nn/conv/acm_conv.py (25 changes: 15 additions & 10 deletions)
@@ -13,16 +13,15 @@
class ACMConv(BaseMP):
r"""Convolutional layer of FBGNN & ACMGNN(I & II).
paper: Revisiting Heterophily For Graph Neural Networks
paper: Complete the Missing Half: Augmenting Aggregation Filtering with Diversification for Graph Convolutional Networks
ref: https://github.com/SitaoLuan/ACM-GNN/blob/main/ACM-Geometric/layers.py
:paper: Revisiting Heterophily For Graph Neural Networks
:paper: Complete the Missing Half: Augmenting Aggregation Filtering with Diversification for Graph Convolutional Networks
:ref: https://github.com/SitaoLuan/ACM-GNN/blob/main/ACM-Geometric/layers.py
Args:
num_hops (int), hop (int): total and current number of propagation hops.
hop=0 explicitly handles x without propagation.
alpha (int): variant I (propagate first) or II (act first)
alpha: variant I (propagate first) or II (act first)
num_hops: total number of propagation hops.
hop: current number of propagation hops of this layer.
``hop=0`` explicitly handles :obj:`x` without propagation.
cached: whether cache the propagation matrix.
"""
supports_batch: bool = False
@@ -41,7 +40,9 @@ def __init__(self,
self.out_channels = out_channels

def _init_with_theta(self):
"""theta (nn.ModuleDict): Linear transformation for each scheme.
r"""
Attributes:
theta (torch.nn.ModuleDict): Linear transformation for each scheme.
"""
self.schemes = self.theta.keys()
self.n_scheme = len(self.schemes)
@@ -72,6 +73,10 @@ def _get_convolute_mat(self, x: Tensor, edge_index: Adj) -> dict:
return {'out': x}

def _forward_theta(self, x, scheme):
r"""
Attributes:
theta (torch.nn.ModuleDict): Linear transformation for each scheme.
"""
if callable(self.theta[scheme]):
return self.theta[scheme](x)
return self.theta[scheme] * x
@@ -83,7 +88,7 @@ def forward(self,
) -> dict:
r"""
Returns:
out (:math:`(|\mathcal{V}|, F)` Tensor): current propagation result
out (Tensor): current propagation result (shape: :math:`(|\mathcal{V}|, F)`)
prop_0, prop_1 (SparseTensor): propagation matrices
"""
h, a = {}, {}
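
The newly documented theta attribute of ACMConv is a per-scheme container whose entries may be learnable modules or plain weights; _forward_theta dispatches between the two, as the diff above shows. A standalone sketch of that dispatch, with the surrounding class omitted (the scheme names and feature sizes below are illustrative assumptions, not taken from the repository):

    import torch
    from torch import nn, Tensor

    def forward_theta(theta, x: Tensor, scheme: str) -> Tensor:
        # Mirror of ACMConv._forward_theta: apply the per-scheme entry as a
        # module if it is callable, otherwise use it as a multiplicative weight.
        if callable(theta[scheme]):
            return theta[scheme](x)
        return theta[scheme] * x

    # Illustrative usage with two hypothetical schemes:
    theta = nn.ModuleDict({"low": nn.Linear(16, 16), "high": nn.Linear(16, 16)})
    x = torch.randn(4, 16)
    out = forward_theta(theta, x, "low")  # shape: (4, 16)
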
pyg_spectral/nn/conv/adj_conv.py (26 changes: 14 additions & 12 deletions)
@@ -8,10 +8,11 @@ class AdjConv(BaseMP):
r"""Linear filter using the normalized adjacency matrix for propagation.
Args:
alpha (float): additional scaling for self-loop in adjacency matrix
:math:`\mathbf{A} + \alpha\mathbf{I}`, i.e. `improved` in PyG GCNConv.
--- BaseMP Args ---
num_hops (int), hop (int): total and current number of propagation hops.
alpha: additional scaling for self-loop in adjacency matrix
:math:`\mathbf{A} + \alpha\mathbf{I}`, i.e. :obj:`improved` in
:class:`torch_geometric.nn.conv.GCNConv`.
num_hops: total number of propagation hops.
hop: current number of propagation hops of this layer.
cached: whether cache the propagation matrix.
"""
def __init__(self,
@@ -31,7 +32,7 @@ def _forward(self,
) -> tuple:
r"""
Returns:
x (:math:`(|\mathcal{V}|, F)` Tensor): current propagation result
x (Tensor): current propagation result (shape: :math:`(|\mathcal{V}|, F)`)
prop (Adj): propagation matrix
"""
if self.hop == 0 and not callable(self.theta):
@@ -47,15 +48,16 @@

class AdjDiffConv(AdjConv):
r"""Linear filter using the normalized adjacency matrix for propagation.
Preprocess the feature by distinguish matrix :math:`\beta\mathbf{L} + \mathbf{I}`.
Preprocess the feature by distinguish matrix :math:`\beta\mathbf{L} + \mathbf{I}`.
Args:
alpha (float): additional scaling for self-loop in adjacency matrix
:math:`\mathbf{A} + \alpha\mathbf{I}`, i.e. `improved` in PyG GCNConv.
beta (float): scaling for self-loop in distinguish matrix
alpha: additional scaling for self-loop in adjacency matrix
:math:`\mathbf{A} + \alpha\mathbf{I}`, i.e. :obj:`improved` in
:class:`torch_geometric.nn.conv.GCNConv`.
beta: scaling for self-loop in distinguish matrix
:math:`\beta\mathbf{L} + \mathbf{I}`
--- BaseMP Args ---
num_hops (int), hop (int): total and current number of propagation hops.
num_hops: total number of propagation hops.
hop: current number of propagation hops of this layer.
cached: whether cache the propagation matrix.
"""
def __init__(self,
@@ -75,7 +77,7 @@ def _forward(self,
) -> dict:
r"""
Returns:
x (:math:`(|\mathcal{V}|, F)` Tensor): current propagation result
x (Tensor): current propagation result (shape: :math:`(|\mathcal{V}|, F)`)
prop (Adj): propagation matrix
"""
if self.hop == 0:
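
Written out, the two matrices named in the AdjDiffConv docstring act in sequence: the distinguish matrix preprocesses the features, and the self-loop-scaled adjacency then propagates them hop by hop. Schematically, ignoring degree normalization and taking L to be the graph Laplacian used by the layer (an assumption; the diff does not spell out its normalization):

    \mathbf{x}' = (\beta\mathbf{L} + \mathbf{I})\,\mathbf{x},
    \qquad
    \mathbf{x}^{(k+1)} = (\mathbf{A} + \alpha\mathbf{I})\,\mathbf{x}^{(k)},
    \quad \mathbf{x}^{(0)} = \mathbf{x}'.
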
(Diffs for the remaining changed files were not loaded on this page.)
