Sph projections, slices, gridding: backend and off-axis #4939
Conversation
Some nice test images for the off-axis projection: I haven't added these as formal answer tests; it's just a visual verification that the outcomes make sense on real data.
@nastasha-w looks like there are a couple of defects at the moment. If you need a hand to get CI green, let me know!
Thanks @neutrinoceros! I think the main issue now is that I branched this off main like 6 months ago, so I'll try merging the latest version in first. I do also think one or a few answer tests from the previous version should fail: these use the old SPH axis-aligned projections.
Nothing a rebase can't fix. It shouldn't be risky actually since you don't have any conflicts.
pre-commit.ci autofix
@neutrinoceros, could you take a look at workflow approval for this PR? I've been submitting a lot of commits recently and I suspect it stopped running everything automatically for that reason. Basically, I've been wrestling with the type-checker a lot today and yesterday.
Plot twist: it never ran automatically, it's always me manually approving. 😅
I took the liberty of pushing to your branch to resolve 2 small bugs that caused a few hundred tests to fail. I'm hoping to get a much clearer picture of what's still broken after that.
(force-pushed from ccd3beb to 9fa337d)
On the GitHub Actions side we're left with 81 failures that all seem to have one root cause: a divide-by-zero error. Let me know if you need a hand there too, otherwise I'll let you fix it.
@neutrinoceros Thanks for the help! I think I've got those divide-by-zero errors now. I'll try to follow the instructions on the website for updating the image tests.
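(For context: this is not the PR's actual fix, just a generic sketch of the usual guard for this failure mode in NumPy-based pixelization code; all names here are hypothetical.)

```python
import numpy as np

def normalize_pixel_weights(weights):
    # Hypothetical helper: normalize per-pixel kernel weights while
    # guarding against pixels whose weight sum is zero (e.g., pixels
    # no SPH particle overlaps), which would otherwise divide by zero.
    totals = weights.sum(axis=-1)
    out = np.zeros_like(weights)
    nonzero = totals > 0
    out[nonzero] = weights[nonzero] / totals[nonzero, np.newaxis]
    return out

# Example: 3 pixels x 2 particles; the last pixel has no contributions.
w = np.array([[1.0, 3.0], [2.0, 2.0], [0.0, 0.0]])
print(normalize_pixel_weights(w))  # last row stays all-zero instead of NaN
```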
pre-commit.ci autofix
@nastasha-w just a note that I have been keeping track of this and I plan on reviewing it once this has been sorted out.
@jzuhone Thanks! It feels like I'm getting close, but then it's felt like that for a few days already.
pre-commit.ci autofix
@neutrinoceros could you help me with the image tests? Under the full nose test suite, the only failing tests now seem to be comparisons of previously generated images of SPH projections to the figures currently produced. Changes are expected there because I changed that backend. I have looked into the instructions in the docs for updating these tests, but they don't seem to work for me. I downloaded the image overview from the “Build and Tests” job summary page, but it seems like that doesn't cover the "full testsuite (nose)" tests where I'm getting the errors. This section here suggests updating the…
Sorry if this is confusing: these are the instructions for updating pytest-based image tests; however, Jenkins is still running nosetests for tests that require real (heavy) datasets.
that's even worse... I believe this part of our documentation was never actually in sync with our practice (it was introduced as part of a substantial but incomplete effort to migrate our test suite to pytest). Disregard it entirely. The actual way we update answers with nose is by bumping version numbers (before: as of yt 4.3; after: as of this branch). Can you explain what's going on?
And about docs builds:
Looking at the console output, it is also not clear to me that the error is actually related to your PR. Something to be aware of is that we only run docs builds on PRs that actively modify or introduce documentation, so it's possible that sometimes a bug slips into a PR that doesn't touch documentation but still breaks documentation builds. This seems unlikely here given that the only PR that was merged between your last successful run and now is #4945, which looks even less related.
here's what a systematic diffing shows:
```
35c35
< dask-2024.7.0
---
> dask-2024.7.1
137c137
< pytest-8.2.2
---
> pytest-8.3.1
166c166
< sphinxcontrib-htmlhelp-2.0.5
---
> sphinxcontrib-htmlhelp-2.0.6
169c169
< sphinxcontrib-qthelp-1.0.7
---
> sphinxcontrib-qthelp-1.0.8
```
Maybe we could try pinning these and see what happens. In fact, I'll try this now (again, taking the liberty to push to your branch).
Looks like it indeed stabilized the builds. I'll be working on a finer and more stable solution in #4952.
Also see #4950 about the (unrelated) failure seen on Windows.
To my surprise, I couldn't reproduce the failure in #4952. Let me try to remove my last commit here.
(force-pushed from 6dbad7c to ab47d98)
With the answer version-number bump, many of the previously failing tests now pass. However, the Arepo frontend tests specifically fail with the error "There is no old answer available." Does anyone have any idea what the problem could be, or how to fix it?
oh, could this be showing some form of corruption in the database, created by not letting the first job finish?
@neutrinoceros that's almost certainly what it is. @Xarthisius needs to fix it.
The checks passed!!
@chummels said he was going to have a look at this.
Let's move it to the 4.4.0 milestone then 🤞🏻
I'm going to merge this by COB Monday unless someone explicitly jumps in. We have two approvals and it's ready to go.
I'm looking at this today and will have comments ASAP.
I'm running tests locally and not seeing any difference in behavior for OffAxisProjections and ProjectionPlots between the dev tip and this PR. I'm using the FIRE m12i z=0 snapshot at a variety of resolutions and for both the ('gas', 'density') and ('gas', 'H_p0_number_density') fields, but getting identical results, so I must be doing something wrong on this side. I would think that as I stepped out to something like width=(500, 'kpc') for the projection, some of the smaller particles would get dropped, as per the description in the PR, but the results look identical. I'm just doing a sanity check on all of this, since the PR is so detailed that it is difficult to review thoroughly by simple code review.
@chummels just to get this out of the way: did you make sure to re-compile after switching branches?
@chummels since you mentioned the width, I also wonder whether you're increasing the grid size (number of pixels) in tandem with the image width (i.e., constant width per pixel), or increasing the grid size with constant image width (decreasing width per pixel). The first would not give a resolution effect with either method, and the differences between the methods would depend on which width per pixel you chose (larger -> bigger difference). The second would give the effect I was worried about. (I'm guessing you thought of this, it's just to be sure.)
Yes, I update between the branches, then…
I was merely changing the width of the ProjectionPlot and not explicitly changing the number of pixels in the resulting image. It seems to me that changing the width of the ProjectionPlot while keeping the number of pixels fixed would change the physical pixel size. It was my understanding from the PR that this would result in different effects as the pixel size became increasingly large relative to the size of the fluid elements: small fluid elements would be dropped from each pixel's line integral and there would be a change in the image.
Yes, changing the width and keeping the number of pixels fixed should lead to resolution-dependent effects. I'm just not sure it would be as easy to see those effects by eye. Are you comparing pixel values directly here?
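(For reference, a minimal sketch of that kind of direct pixel comparison. The snapshot path is hypothetical, and `buff_size` is assumed to be supported by `ProjectionPlot`, as in recent yt releases.)

```python
import numpy as np
import yt

ds = yt.load("m12i_snapshot_600.hdf5")  # hypothetical FIRE m12i snapshot path

# Keep the pixel count fixed while varying the width, so the physical
# pixel size changes; backend differences should grow with pixel size.
buffers = {}
for width_kpc in (50, 150, 500):
    p = yt.ProjectionPlot(
        ds,
        "z",
        ("gas", "density"),
        width=(width_kpc, "kpc"),
        buff_size=(800, 800),
    )
    buffers[width_kpc] = np.array(p.frb[("gas", "density")])

# Run the same script on main and on this branch, save the arrays
# (e.g., with np.save), and compare pixel values directly rather than
# judging the images by eye.
```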
I tried a similar test myself, running on the… The first difference I found is that the…
I also downloaded the FIRE m12i data from the yt hub (FIRE_M12i_ref11), and got the following images for x and z projections of gas density and H I number density:
Here are those same tests, but only showing the part of the image from the highest zoom level in each panel. The differences are more apparent here, including that the central 'dot' in…
It's naturally harder to see these kinds of artifacts when looking in a logarithmic density space. However, the inaccuracies are always there (as shown by Natasha's plots). I ran into this problem when looking at convergence in this paper: https://ui.adsabs.harvard.edu/abs/2021arXiv210605281B/abstract (referenced in the other thread), specifically this figure. Looking at the two you don't see differences immediately, but the numerical discrepancies are there. They depend sensitively on the ratio between the typical smoothing length and the pixel size, as Natasha's plots show.
So to follow up on this, I guess my install of yt was quite old indeed. I removed yt and conda and all dependencies and reinstalled everything from scratch, and now I see that this PR gives the desired behavior. Here are some quick and dirty plots showing that this PR does in fact address the resolution issue with consistent behavior. FIRE_m12i: AREPO Cluster: Nominally, I think everything is good to go, but maybe best to chat in a few hours about all of this before merging?
@chummels yeah, I think this would be a good discussion to have, on which methods are desirable if nothing else. I'm glad it worked out, but yeah, those installations of different versions are consistently a pain in the heinie.
Where did we land with this at the meeting?
Basically, merge the sph_proj_backend branch with the backend fixes, but with a few small action items for other pull requests:
- add a bit of info on the backend method, and differences between the SPH and AMR/grid backends, to the narrative docs and the docstrings
- clean up some testing functions that are duplicated between the sph_proj_backend PR and the smaller PR for the Ray backend fix (inputting squared vs. not-squared normalized impact parameters into the backend function).
And we decided it would be a good idea to add a different backend for SPH projections as the default, with the pixel-center line-of-sight integrations as an alternative option. The new default would average the total contributions from all SPH particles to the rectangular prism defined by each grid pixel over that pixel. E.g., for a column density, the density of each particle would be multiplied by the integral of the SPH kernel over the pixel's rectangular prism, and then the total of those masses would be divided by the pixel surface area. The current method instead integrates the kernel over a pencil beam through the pixel center and multiplies the density by that. Both methods are valid ways of computing a column density, with different methods preferred for different applications. However, the rectangular prism integral is more intuitive, and more similar to the method used for AMR and grid data. This would be another, larger PR.
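(As a sketch of the distinction — notation mine, not from the PR: with particle masses $m_i$, smoothing lengths $h_i$, kernel $W$, and pixel area $A$, the two column-density estimates are:)

```latex
% LOS-integrated kernel, with b the impact parameter of the sightline:
%   \hat{W}(b, h) = \int_{-\infty}^{\infty} W\!\left(\sqrt{b^2 + z^2},\, h\right) dz

% Pencil beam through the pixel center (x_c, y_c) -- the current method:
\Sigma_{\mathrm{pencil}} = \sum_i m_i\, \hat{W}\!\left(b_i(x_c, y_c),\, h_i\right)

% Average over the pixel's rectangular prism -- the proposed default:
\Sigma_{\mathrm{prism}} = \frac{1}{A} \sum_i m_i \int_{\mathrm{pixel}}
    \hat{W}\!\left(b_i(x, y),\, h_i\right) dx\, dy
```

Since the 3D kernel integrates to unity, summing $A\,\Sigma_{\mathrm{prism}}$ over all pixels recovers each particle's full mass; that mass conservation is the sense in which the prism method behaves more like the AMR/grid backends.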
We should probably open issues for these.
Kind regards,
Nastasha
Fixes issues in the SPH projection backend and adds off-axis projection for SPH particles.
PR Summary
The current version no longer adds small SPH particles to column densities along lines of sight that those particles do not intersect. Additionally, this version supports off-axis projections for SPH datasets.
Similar issues in slice plots and grid value interpolation (scatter method) for SPH datasets are still TODO. Tests are also currently in a separate script sph_pixelization_tests_2024-07-03.py.zip, so I can run the tests on the main and issue branches.
This addresses some of the points in issue #4788; it should eventually address all of them.
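(For illustration, a minimal usage sketch of the off-axis capability added here. The dataset path is hypothetical, and `OffAxisProjectionPlot` is assumed to be the user-facing entry point, as it is for other yt dataset types.)

```python
import yt

ds = yt.load("snapshot_033.hdf5")  # hypothetical SPH dataset

# Project ('gas', 'density') along an arbitrary normal vector, which
# this PR enables for SPH particle data.
normal = [1.0, 1.0, 0.0]
p = yt.OffAxisProjectionPlot(
    ds,
    normal,
    ("gas", "density"),
    north_vector=[0.0, 0.0, 1.0],
    width=(300, "kpc"),
)
p.save("sph_offaxis_density.png")
```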
PR Checklist