Peer Review #3 (#3)

Open
distillpub-reviewers opened this issue Jun 1, 2021 · 0 comments
The following peer review was solicited as part of the Distill review process.

The reviewer chose to keep anonymity. Distill offers reviewers a choice between anonymous review and offering reviews under their name. Non-anonymous review allows reviewers to get credit for the service they offer to the community.


General Comments

The paper gives an overview of GNNs and their applications to different areas. The most novel aspect of the article, in my opinion, is that it brings three different approaches to graph neural networks - global convolutions, local convolutions, and modern spatial convolutions - into a coherent framework. It's an idea that's intuitively appealing from a narrative-framing perspective. There's a long period of setting up the math behind spectral GNNs and their relationship to ChebNet, and one wishes to see a big payoff in the third section about modern spatial convolutions. However, I think for the most part the payoff is not there. The problem is that the ideas behind modern spatial convolutions are easy to grasp without any reference to the graph Laplacian - message passing between immediate neighbors makes intuitive sense without invoking the eigenspectrum of the Laplacian. As such, there's a large amount of math the reader has to work through to get to the third part - but it turns out the math is not really needed, and the graph Laplacian is never referenced again! The article thus subverts expectations, and the effort the reader puts into sections 1 and 2 isn't quite rewarded.
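To make the point concrete: message passing really can be stated in a few lines with no spectral machinery at all. A minimal NumPy sketch on a hypothetical 4-node toy graph (not from the article) is all it takes:

```python
import numpy as np

# Hypothetical 4-node toy graph: adjacency matrix A, scalar node features X.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0], [2.0], [3.0], [4.0]])

# One round of message passing: every node averages its immediate
# neighbors' features. No eigenspectrum of the Laplacian in sight.
deg = A.sum(axis=1, keepdims=True)
X_next = (A @ X) / deg
```

This is exactly the "immediate neighbors" intuition; the spectral setup in sections 1 and 2 is not needed to follow it.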

However, I do feel that the article is interesting as a historical perspective and a roadmap for learning more about this area of research, which is valuable for a novice. Could the article work if sections 1 and 2 were shrunk down to focus on intuitive explanations? Or would it work if sections 1, 2, and 3 were reordered as 3, 1, 2: message passing first, take the infinite limit next, then truncate? It could be worth exploring whether these changes would make the article flow better.

Notes for some of the figures:

  • The interactive applet in the Graph Laplacian section: I found this applet inscrutable at first. What it does is show what happens when you mix different eigenvectors of the Laplacian. That's fine and valuable, but the problem is that it never shows the eigenvectors in isolation. I think this would work better if you showed the eigenvectors above the sliders at the bottom; then it would be clear that you're mixing different eigenvectors.

  • The interactive applet in the From Global to Local Convolutions section: I don't completely understand what this visualization is telling me. One thing that doesn't help is that it has a similar layout to the previous applet, but the weights attached to the sliders at the bottom mean something different, which subverts expectations. Do higher powers of the Laplacian correspond to higher spatial frequencies? It's very hard to intuit from the visualization. Another thing that doesn't help is that the area covered by the example image is tiny. It may be interesting to show an equivalent convolution kernel in addition to the image acted upon by it.

  • For the Game of Life section, I found that with an overparametrization factor of 10, the GNN's predictions were always perfect on every input, regardless of whether I used the best or the worst seed. This somewhat undermines the narrative that CNNs and GNNs have equal expressiveness.
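On the question raised in the second note above, the spatial half of the answer is easy to check directly: row i of L^k is the "equivalent convolution kernel" centered at node i, and its support grows by one hop per power of L, which is why truncated polynomials in L give local filters. A quick NumPy sketch on a hypothetical 6-node path graph (the frequency half of the question would need the eigenvalues instead):

```python
import numpy as np

# Hypothetical path graph on 6 nodes; L = D - A.
n = 6
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A

for k in (1, 2, 3):
    # Row 0 of L^k acts as the convolution kernel centered at node 0.
    kernel = np.linalg.matrix_power(L, k)[0]
    support = np.flatnonzero(np.abs(kernel) > 1e-12)
    print(k, support)  # the support widens by one hop per power of L
```

Showing this kernel alongside the filtered image, as suggested above, would make the applet far easier to read.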

Nits:

  • I don't think that the number of eigenvectors slider in The Graph Laplacian interactive adds to the narrative: it's a minor knob in the most visually prominent location in the visualization.
  • Under From Global to Local Convolutions, the sentence "let us look at the effect of multiplying a feature function x by the Laplacian L on its spectral representation \hat x", followed by a hat placed across Lx, threw me off. I would like more consistency in the use of hats.
  • There's a lot of use of exclamation points (32!) and I think it's stylistically jarring compared to other distill articles.
  • The text introducing the interactive visualizations is poorly integrated, e.g. "To understand what these models have learned, we have created the interactive visualization below!". In other Distill articles, the default state of the visualization shows a meaningful example, and a caption below indicates what the figure is supposed to communicate.
  • It's not clear in the visualizations which buttons are the important ones to click. For instance, in Interactive Graph Neural Networks the key action is Update All Nodes, but it's neither bolded nor in a prominent location.
  • The visualizations take more than one screen on my biggest screen and have lots of whitespace. They could be made more compact.
  • Most of the examples are based on data that are well represented by images. Could you have an example that is not a random graph or an image, something that could only be meaningfully done with graph neural nets?
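The hat-notation nit above can be stated precisely: with the eigendecomposition L = U diag(w) Uᵀ, the spectral representation of x is x̂ = Uᵀx, and multiplying by L scales each spectral coefficient by its eigenvalue, so the hat over Lx means w ⊙ x̂. A quick NumPy check on a hypothetical 3-node path graph:

```python
import numpy as np

# Hypothetical 3-node path graph; L is symmetric, so eigh gives
# L = U @ diag(w) @ U.T with orthonormal columns of U.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
w, U = np.linalg.eigh(L)

x = np.array([1.0, -2.0, 0.5])
x_hat = U.T @ x                                # spectral representation of x
assert np.allclose(U.T @ (L @ x), w * x_hat)   # (Lx)-hat = w * x-hat
```

Stating the identity once in this form, and then using hats consistently, would resolve the confusion.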

On a more positive note, the use of color in equations is a nice touch, and it helps the reader follow the DAG of the computation. One thing I didn't realize at first is how the color scale was used in the Interactive Graph Neural Networks visualization; it could be made more prominent by using more distinct colors rather than just shades of red.

TL;DR: This is a valuable primer on the field of graph neural networks, covering applications, formulations, and expressiveness, and ending with a well-curated reading list. It offers a good overview of the field for the neophyte. The primer gets bogged down in its sections on local and global convolutions, which the authors will want to smooth over. The interactive visualizations in these sections are not particularly helpful, but could be made more so with additional visualization of the underlying eigenvectors and convolutional filters. Overall, a fine contribution to Distill.


Distill employs a reviewer worksheet as a help for reviewers.

The first three parts of this worksheet ask reviewers to rate a submission along certain dimensions on a scale from 1 to 5. While the scale meaning is consistently "higher is better", please read the explanations for our expectations for each score—we do not expect even exceptionally good papers to receive a perfect score in every category, and expect most papers to be around a 3 in most categories.

Any concerns or conflicts of interest that you are aware of?: No known conflicts of interest
What type of contributions does this article make?: Exposition on an emerging research direction

Advancing the Dialogue Score
How significant are these contributions? 4/5
Outstanding Communication Score
Article Structure 3/5
Writing Style 3/5
Diagram & Interface Style 3/5
Impact of diagrams / interfaces / tools for thought? 2/5
Readability 3/5

Comments on Readability

See general comments.

Scientific Correctness & Integrity Score
Are claims in the article well supported? 4/5
Does the article critically evaluate its limitations? How easily would a lay person understand them? 4/5
How easy would it be to replicate (or falsify) the results? 5/5
Does the article cite relevant work? 4/5
Does the article exhibit strong intellectual honesty and scientific hygiene? 4/5

Comments on Scientific Integrity

Sending the observable notebooks is a great step towards making remixes more accessible.
