Adjust float type of kernel functions used in propagate methods #70

Draft · wants to merge 2 commits into main

Conversation

ckendrick (Collaborator) commented:

This expands further on #57 to also change the float types of any device functions used inside a propagate function to prevent casting.
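As a hedged illustration of the idea (a minimal sketch assuming Numba CUDA device functions; the function and argument names are invented and are not this PR's code):

```python
from numba import cuda

# Giving the device function an explicit float32 signature keeps callers in
# single precision: arguments and the return value are float32, so no
# implicit up-cast to float64 happens at the call site.
@cuda.jit("float32(float32, float32)", device=True)
def attenuate(amplitude, factor):
    return amplitude * factor
```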

mtbc (Collaborator) commented May 20, 2022

This is certainly a useful step forward. After some discussion and experimentation, a couple of issues:

  1. It relies on DeviceFunction, which is no longer available as of numba 0.55, so we're going to have to find an alternative approach. I am having difficulty finding any official guidance on this.
  2. change_floattype("float32") still leaves our code full of float64 because, for instance, y = x / 2 with a float32 x gives a float64 y. We appear to get the desired effect with y = x / float32(2), but then (a) we need a generic float caster for whichever type has been configured and (b) we have to litter our code with it (see the sketch after this list).
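A minimal sketch of the promotion problem and a configurable caster, assuming plain Numba njit code; the alias flt, and the idea that change_floattype rebinds it, are illustrative rather than an existing API:

```python
from numba import njit, float32

flt = float32  # a hypothetical change_floattype("float64") would rebind this


@njit
def halve(x):
    # With a float32 x, `x / 2` is typed float64 under Numba's promotion
    # rules; casting the literal keeps the result in the configured type.
    return x / flt(2)
```

The same alias could serve as the "generic float caster" mentioned above, so that switching precision only touches one definition.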

mtbc (Collaborator) commented Jun 3, 2022

From inspecting the inferred types, it unfortunately seems crucial to decorate literals with constructors too. We could define such constructors for int and float so that, e.g., change_floattype("float32") also means that,

x = y / 2.0

becomes,

x = y / float32(2.0)

(after importing it from numba)

So we probably have to adjust (clutter) our propagation code with such a parameterizable constructor for literals, unless there's some cunning way a Python decorator can implement code preprocessing (a rough sketch of that idea follows). We might also need to address this for calc_reflectivity.
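For what it's worth, one hedged sketch of the decorator-based preprocessing idea: rewrite the function's AST so every float literal is wrapped in the configured constructor before Numba compiles it. Everything here (wrap_literals, FLOAT_TYPE) is invented for illustration and is not this PR's code:

```python
import ast
import inspect
import textwrap

from numba import njit, float32

FLOAT_TYPE = float32  # what change_floattype("float32") would select


class _WrapFloatLiterals(ast.NodeTransformer):
    def visit_Constant(self, node):
        if isinstance(node.value, float):
            # 2.0 -> FLOAT_TYPE(2.0)
            return ast.Call(func=ast.Name(id="FLOAT_TYPE", ctx=ast.Load()),
                            args=[node], keywords=[])
        return node


def wrap_literals(func):
    """Rewrite float literals as FLOAT_TYPE(...) casts, then jit-compile."""
    source = textwrap.dedent(inspect.getsource(func))
    tree = ast.parse(source)
    tree.body[0].decorator_list = []  # drop @wrap_literals from the copy
    tree = ast.fix_missing_locations(_WrapFloatLiterals().visit(tree))
    namespace = {**func.__globals__, "FLOAT_TYPE": FLOAT_TYPE}
    exec(compile(tree, filename="<wrap_literals>", mode="exec"), namespace)
    return njit(namespace[func.__name__])


@wrap_literals
def propagate_step(y):
    return y / 2.0  # compiled as y / FLOAT_TYPE(2.0)
```

This would keep the propagation source uncluttered at the cost of some import-time magic; whether it plays well with CUDA device functions would need checking.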
