
Conversation

@simone-silvestri
Collaborator

This PR introduces the precomputation of buoyancy gradients, which are required by most parameterizations.
By doing this, we avoid heavy interpolation in buoyancy computations (for example in slope computations or in Richardson number computations), significantly speeding up the parameterizations.

@simone-silvestri simone-silvestri added the benchmark performance runs preconfigured benchmarks and spits out timing label Oct 9, 2025
arch = architecture(grid)

# Maybe compute buoyancy gradients
compute_buoyancy_gradients!(buoyancy, grid, tracers; parameters = κ_parameters)
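The idea behind the call above can be illustrated with a minimal, hypothetical sketch (in Python, not the actual Oceananigans.jl implementation): materialize the buoyancy gradient field once per time step so that every parameterization reads the stored values instead of re-deriving them. All function names here (`buoyancy`, `db_dz`, `precompute_gradients`) are invented for illustration.

```python
def buoyancy(b, i):
    # Stand-in for a buoyancy evaluation that, in a real model, may
    # involve interpolation between staggered grid locations.
    return b[i]

def db_dz(b, i, dz):
    # Centered finite-difference vertical buoyancy gradient (interior points).
    return (buoyancy(b, i + 1) - buoyancy(b, i - 1)) / (2 * dz)

def precompute_gradients(b, dz):
    # Materialize the gradient field once; parameterizations (e.g. slope or
    # Richardson number computations) then read it instead of recomputing it.
    return [db_dz(b, i, dz) for i in range(1, len(b) - 1)]

b = [0.0, 1.0, 4.0, 9.0, 16.0]   # buoyancy profile on a uniform grid
dz = 1.0
grads = precompute_gradients(b, dz)
print(grads)  # [2.0, 4.0, 6.0]
```

With many parameterization kernels consuming the same gradients, computing them once amortizes the interpolation cost across all consumers, at the price of storing one extra field.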
Member


Why does it take "κ_parameters"? Are these the parameters for the turbulence closure? It's not clear why we would use those.

Collaborator Author


Right, I think we need some interior_parameters and surface_parameters.

Member

@glwagner glwagner left a comment


Looks good, except for general confusion about the parameters and a few name-change suggestions.

@glwagner glwagner changed the title Precompute buoyancy gradients for use in parameterizations Materialize buoyancy gradients for use in parameterizations Oct 9, 2025
@simone-silvestri
Collaborator Author

The difference between main and this PR is staggering for all the CATKE functions

On main

CATKE was the most expensive kernel
[profiler screenshot]

On this PR

Tendencies reclaim their role as the heaviest computations in the model

[profiler screenshot]
