
Faster bound #149

Merged
57 commits merged into clawpack:master on Jan 10, 2016
Conversation

@mjberger
Contributor

As the name says: this is sometimes substantially faster for large numbers of grids. Has to go with the matching geoclaw fasterBound branch. Also has lots of other tiny fixes, mostly format statement enlargements, also to go with larger numbers of grids.

@rjleveque
Member

Travis is showing an error in tests/advection_3d_swirl. I ran this branch and master in that directory and noticed that fort.q0000 already disagrees because src/3d/amr_module.f90 has max1d = 32 instead of 60. Was that change intended?

But there must be more differences, judging from the Travis error (which I also get on my laptop):

AssertionError:
  new_data: [648.73151345888198],
  expected: 32000.0

@rjleveque
Member

After changing max1d back to 60 I now get the error

AssertionError: Full gauge match failed.

in tests/advection_3d_swirl. This seems to be from a bug in the testing framework: when I run

     python regression_test.py True

to regenerate the regression data and then do a git diff, I get the results below: all the same data, but the gauges are printed in a different order.

[attached screenshot: regression diff]
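
For illustration only (not part of this PR, and not the actual test framework code): one way to make the comparison insensitive to the order in which gauges are printed is to sort the gauge output rows before diffing. The column layout assumed below (gauge number, AMR level, time, q values) is an assumption about the fort.gauge format.

    # Hypothetical helper, not the clawutil code: sort gauge rows by gauge
    # number and then by time, so the comparison no longer depends on the
    # order in which the grids happened to write their gauge data.
    import numpy as np

    def sorted_gauge_data(path):
        data = np.loadtxt(path)                       # one row per gauge output line
        # assumed layout: column 0 = gauge number, column 2 = time
        order = np.lexsort((data[:, 2], data[:, 0]))  # primary key: gauge, then time
        return data[order]

    def gauges_match(new_path, expected_path, rtol=1e-14):
        return np.allclose(sorted_gauge_data(new_path),
                           sorted_gauge_data(expected_path), rtol=rtol)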

@mjberger
Contributor Author

I think they were run with different numbers of cores. When I ran them with the same number, they agreed completely.

But what is the 32000 from?


@rjleveque
Member

I think even with the same number of cores (> 1) it could be random which gauge prints first at a given time step if they are on grids handled by different cores. At any rate this indicates a problem with our test framework.

On my laptop the tests now pass if I set OMP_NUM_THREADS to 1. I don't know how many threads Travis uses. The 32000 comes from summing data, I think, but it would be good if @mandli could take a look at what's going on.
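
Purely as a guess at what that number means (the summation over gauge output is speculation, and the file name and column layout below are assumptions, not the actual clawutil check):

    # Speculative sketch: if the regression scalar is a sum over the gauge
    # output, it would be computed roughly like this and then compared
    # against a stored expected value such as 32000.0.
    import numpy as np

    gauge_data = np.loadtxt("_output/fort.gauge")
    data_sum = np.sum(np.abs(gauge_data[:, 3:]))   # sum the q columns over all rows
    print(data_sum)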

@mjberger
Contributor Author

As you say, the gauge output definitely depends on the random ordering of grids. When I got the same answer with both branches, I was running on a single core.

Don’t know where the 32000 came from though. Is that from before the filpatch bug was fixed?

Marsha


@mandli
Member

mandli commented Oct 17, 2015

The testing framework should compare individual gauges; it does this for the summation test but not for the full one (probably because I never tested this with two gauges). I raised issue clawpack/clawutil#89.
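
For reference, a minimal sketch of what comparing individual gauges could look like; this is not the fix proposed in clawpack/clawutil#89, and the column layout and tolerances are assumptions:

    # Illustrative only: compare each gauge's time series separately, so two
    # gauges printed in a different order by different threads still match.
    import numpy as np

    def compare_gauges_individually(new_file, expected_file, rtol=1e-14, atol=1e-14):
        new = np.loadtxt(new_file)
        expected = np.loadtxt(expected_file)
        # assumed layout: column 0 = gauge number, column 2 = time
        for gauge_id in np.unique(expected[:, 0]):
            g_new = new[new[:, 0] == gauge_id]
            g_exp = expected[expected[:, 0] == gauge_id]
            g_new = g_new[np.argsort(g_new[:, 2])]    # order each gauge's rows by time
            g_exp = g_exp[np.argsort(g_exp[:, 2])]
            assert np.allclose(g_new, g_exp, rtol=rtol, atol=atol), \
                "mismatch for gauge %d" % int(gauge_id)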

FasterBound updated to incorporate 5.3.1 changes to master
@rjleveque
Member

After consulting with @mandli and @mjberger, we decided this is ready to merge. Note that geoclaw will be temporarily broken on master until clawpack/geoclaw#173 is merged in. We also need to update clawpack/clawpack to make these work together.

rjleveque added a commit that referenced this pull request Jan 10, 2016
@rjleveque merged commit 720dc14 into clawpack:master on Jan 10, 2016