
test_mitgcm test failing with new dask release #1762

Open
VeckoTheGecko opened this issue Nov 13, 2024 · 3 comments

@VeckoTheGecko
Contributor

docs/examples/example_dask_chunk_OCMs.py::test_mitgcm has been failing in CI this week, following the release of dask v2024.11.0.

Downgrading to dask v2024.10.0 fixes the error, but it would be good to investigate and patch the test.

@erikvansebille
Member

This has to do with the chunksize=auto chunking. This is quite a tricky part of the code to comprehend; I'm not sure it'll be easy to find why or what changed. An alternative is to relax the unit test? The most important tests here are that the chunk_mode=specific* tests pass, which they still seem to do:

if chunk_mode == "specific_same":
assert fieldset.gridset.size == 1
elif chunk_mode == "specific_different":
assert fieldset.gridset.size == 2
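
For context, here is a minimal dask-only sketch (not taken from the Parcels code base; the array shape and chunk sizes are made up) of why the two modes behave differently: 'auto' chunk shapes are derived from dask's array.chunk-size setting and internal heuristics, which can shift between dask releases, while an explicit chunk specification is fixed by the caller.

```python
# Sketch only: compares dask's 'auto' chunking with an explicit chunk spec.
import dask
import dask.array as da

# The target chunk size that 'auto' works towards (default "128MiB");
# the exact chunk shapes chosen on top of this are a dask-internal heuristic.
print(dask.config.get("array.chunk-size"))

# ~1.5 GiB lazy array, large enough that 'auto' has to split it into chunks.
auto_chunked = da.zeros((20000, 20000), dtype="float32", chunks="auto")
fixed_chunked = da.zeros((20000, 20000), dtype="float32", chunks=(2000, 2000))

print(auto_chunked.chunks)   # chosen by dask; can change between releases
print(fixed_chunked.chunks)  # always a 10 x 10 grid of (2000, 2000) chunks
```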

@VeckoTheGecko
Contributor Author

VeckoTheGecko commented Nov 19, 2024

Yeah, I'm struggling to understand what the test (and the underlying fieldfilebuffer.py) is doing. The fact that we get a failure here:

if chunk_mode != "specific_different":
    assert len(fieldset.U.grid._load_chunk) == len(fieldset.V.grid._load_chunk)

leads me to think that, when using auto chunking, the U and V grids for some reason don't end up with the same number of chunks. I'm not sure what the implications of that are, though.
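
As a hedged, dask-only sketch of the kind of mismatch I suspect (the shapes below are invented for illustration and do not come from the MITgcm example data): on a staggered grid the U and V fields have slightly different shapes, and 'auto' chunking is free to pick different chunk layouts for each of them.

```python
# Sketch only: two staggered-grid-shaped arrays chunked with 'auto'.
import dask.array as da

u = da.zeros((100, 2000, 2001), dtype="float32", chunks="auto")  # U-like shape
v = da.zeros((100, 2001, 2000), dtype="float32", chunks="auto")  # staggered V-like shape

print(u.numblocks, v.numblocks)        # chunks per dimension, chosen by dask
print(u.npartitions == v.npartitions)  # under 'auto' the totals are not guaranteed to match
```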

@erikvansebille do you have any immediate ideas on how we could relax the tests? I think these tests could do with a refactor down the line.

xref #853

@erikvansebille
Member

> @erikvansebille do you have any immediate ideas on how we could relax the tests? I think these tests could do with a refactor down the line.

Yes, a refactor would be good down the line, also when we get uxarray support in Parcels v4. But until then, perhaps we can just xfail these tests?
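
For example, a minimal sketch of how only the auto case could be marked as an expected failure in the existing parametrisation (the parameter names below are illustrative; the real test in example_dask_chunk_OCMs.py may spell them differently):

```python
import pytest


@pytest.mark.parametrize(
    "chunk_mode",
    [
        False,
        pytest.param(
            "auto",
            marks=pytest.mark.xfail(
                reason="auto-chunking behaviour changed in dask 2024.11.0, see #1762",
                strict=False,
            ),
        ),
        "specific_same",
        "specific_different",
    ],
)
def test_mitgcm(chunk_mode):
    ...
```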
