Some miscellaneous feedback #3
Comments
Ok, probably not the sin/cos functions, since Victoria and I both use those also. Take a look at this link and note that you want to use const whenever possible (it doesn't appear you are doing this currently).
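For what it's worth, here is a minimal sketch of what that looks like in Julia; the parameter names and values below are hypothetical, not taken from the script:

```julia
# Non-const globals are type-unstable in Julia, so every function that uses
# them pays a dynamic lookup cost on each call:
# V∞ = 0.5
# N² = 1e-4

# Declaring them const lets the compiler specialize on their values/types:
const V∞ = 0.5     # background flow speed [m/s]; hypothetical value
const N² = 1e-4    # buoyancy frequency squared [s⁻²]; hypothetical value
const f  = 1e-4    # Coriolis parameter [s⁻¹]; hypothetical value

# Functions that close over const globals compile to fast, type-stable code:
buoyancy_profile(z) = N² * z
```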
Just a quick question here: where does the 10 sec timestep come from?
It probably comes from a suggestion of mine that my simulations (in the laminar phase) take timesteps of O(10 seconds). I believe @loganpknudsen has tracked this down to the use of very large interior velocities and the fact that the CFL condition now includes those in the calculation. Actually, perhaps the CFL calculation shouldn't include those in the case of a background flow in a 'flat' dimension, but it's not a big deal.
Yeah, I agree it shouldn't. I remember seeing some discussion about the CFL calculation a few months ago, which is probably where they changed it. It might be worth raising an issue at some point since it's an easy fix.
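For context, a back-of-the-envelope sketch of the advective CFL constraint that ties Δt to velocity and grid spacing (all numbers here are purely illustrative):

```julia
# Advective CFL: Δt ≤ cfl * min(Δx/|u|, Δy/|v|, Δz/|w|).
# A large background velocity included in |u| shrinks the allowed Δt
# proportionally, even in a "flat" (periodic, gradient-free) direction.
cfl = 0.2
Δx, Δz       = 10.0, 1.0     # grid spacings [m]; illustrative
u_max, w_max = 0.5, 0.01     # maximum velocities [m/s]; illustrative

Δt_advective = cfl * min(Δx / u_max, Δz / w_max)
println("CFL-limited Δt ≈ ", Δt_advective, " s")   # ≈ 4 s with these numbers
```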
Also, just reinforcing what I had already mentioned to @loganpknudsen before: I think the strategy of progressively turning things off in the code to see where the slowdown is coming from is probably the way to go if you haven't yet managed to speed up the code.
Also², @loganpknudsen, last we talked you were going to time how long each time step takes on average (which you can do either using Oceanostics or by letting the whole thing run and dividing the total wall time by the number of timesteps). Did you manage to do it? For reference, my very complex headland simulations running on 100 million grid points take about 1.7 seconds per time step on an A100 GPU and about 2.5 seconds on V100s. Related to that: like @wenegrat said, your velocity was pretty large, likely contributing to a small Δt, in which case the slowdown is "physical". What's the average Δt in your simulations? (Or, a better question: how does it evolve?)
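In case it's useful, here is a minimal sketch of timing the average wall-clock cost per time step with a progress callback, using the standard Oceananigans `Simulation`/`Callback` API; the toy grid and model are only placeholders so the snippet stands alone:

```julia
using Oceananigans
using Printf

# Placeholder model/simulation so the sketch is self-contained; in practice,
# attach the callback to the existing simulation in the script.
# Swap CPU() for GPU() to time on the GPU.
grid  = RectilinearGrid(CPU(), size=(32, 32, 32), extent=(100, 100, 100))
model = NonhydrostaticModel(; grid)
simulation = Simulation(model, Δt=10, stop_time=3600)

wall_clock = Ref(time_ns())

function progress(sim)
    elapsed = 1e-9 * (time_ns() - wall_clock[])
    @printf("iter: %d, sim time: %s, Δt: %s, avg wall time per step: %.3f s\n",
            sim.model.clock.iteration, prettytime(sim.model.clock.time),
            prettytime(sim.Δt), elapsed / 100)
    wall_clock[] = time_ns()
    return nothing
end

# Report every 100 iterations; elapsed wall time / 100 ≈ seconds per time step.
simulation.callbacks[:progress] = Callback(progress, IterationInterval(100))

run!(simulation)
```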
I will get some answers on this once I get things running again; sorry, I just saw this. I can try switching to the A100 GPU.
BottomBoundaryLayer/BBL_with_oscillations_code_GPU_check.jl, line 57 in bc0f594:
This is a large velocity. You could very easily reduce it by a factor of 5 (i.e., make it 10 cm/s) and have a realistic DWBC-type flow.
BottomBoundaryLayer/BBL_with_oscillations_code_GPU_check.jl, line 59 in bc0f594:
Likewise, this is a very strong stratification (I assume you chose it because your velocity was so large). Even an order of magnitude smaller would be representative of a strong pycnocline, whereas a deep-ocean value would be three orders of magnitude smaller ($10^{-7}$).
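To make those magnitudes concrete, a hypothetical parameter block (names and exact values are illustrative, not copied from the script):

```julia
# Background flow: ~10 cm/s is a realistic DWBC-type speed
const V∞ = 0.1               # m/s (a factor of 5 below the current value)

# Stratification (buoyancy frequency squared):
const N²_pycnocline = 1e-5   # s⁻², strong pycnocline
const N²_deep_ocean = 1e-7   # s⁻², representative deep-ocean value
```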
BottomBoundaryLayer/BBL_with_oscillations_code_GPU_check.jl, line 121 in bc0f594:
I would look at the output and make sure that after the first handful of timesteps the model is using the max $\Delta t$. My expectation is that it should be (since I think everything should be laminar in this setup).
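One way to check this, sketched below assuming the script uses Oceananigans' `TimeStepWizard` around line 121 (a guess; adjust to the actual setup) and reusing the `simulation` object from the timing sketch above. The cfl and max_Δt values are placeholders:

```julia
# Hypothetical wizard configuration; the script's actual cfl/max_Δt may differ.
wizard = TimeStepWizard(cfl=0.2, max_Δt=10)
simulation.callbacks[:wizard] = Callback(wizard, IterationInterval(10))

# Print the adaptive Δt so you can verify it reaches max_Δt after the first
# handful of iterations (it should, if the flow stays laminar).
print_Δt(sim) = @printf("iter %d: Δt = %s\n",
                        sim.model.clock.iteration, prettytime(sim.Δt))
simulation.callbacks[:print_Δt] = Callback(print_Δt, IterationInterval(10))
```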
BottomBoundaryLayer/BBL_with_oscillations_code_GPU_check.jl, line 73 in bc0f594:
This line (along with the two that follow, involving sin and cos) might be a culprit for the slowdown, as I believe I recall Tomas mentioning that the GPU doesn't like trig functions.
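A quick way to test that hypothesis is to temporarily swap the trig calls for constants and compare the wall time per step. A sketch with a hypothetical oscillating background field (the functional form and parameters are illustrative, not copied from line 73):

```julia
using Oceananigans

# Hypothetical oscillating background velocity; the actual expression on
# line 73 will differ.
U_osc(x, y, z, t, p) = p.V∞ * cos(p.ω * t) * exp(z / p.h)

# To test whether the trig calls are the bottleneck, temporarily replace them
# with a constant and compare the wall time per step:
U_const(x, y, z, t, p) = p.V∞ * exp(z / p.h)

U_background = BackgroundField(U_osc, parameters=(V∞=0.1, ω=1e-4, h=100))
# ...then pass it to the model, e.g.
# model = NonhydrostaticModel(; grid, background_fields=(; u=U_background))
```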
The way I would probably debug this is to progressively turn things off (making sure at each step that it still runs fast).
Let me know what you find!