
Conversation

@lacoak21 (Contributor) commented Dec 9, 2025

Change Summary

Overview

Create Ultra helio index maps on the fly for use in calculating the exposure factor, efficiencies, and geometric function. This is an alternative to reading in many ancillary files. We decided that writing the code to generate the maps is the better approach in case we need to change it.

Most of the code to produce the index map was written by Claude after importing the Java code that Nick uses. There were a few small issues to fix afterwards.

Updated Files

  • imap_processing/ultra/constants.py
    • Add constants for helio index map creation
  • imap_processing/ultra/l1b/lookup_utils.py
    • Fix the inside-FOV function to match Nick's version (from the Java code)
  • imap_processing/ultra/l1c/helio_pset.py
    • Make index maps on the fly for helio PSETs
  • imap_processing/ultra/l1c/l1c_lookup_utils.py
    • Fix the valid-pixel calculation to handle energy-dependent index tables
  • imap_processing/ultra/l1c/make_helio_index_maps.py
    • Code for calculating the helio index maps
  • imap_processing/ultra/l1c/ultra_l1c_pset_bins.py
    • Fix code to use energy-dependent index maps

Testing

  • Validation test for the index maps
  • Validation test for helio PSET efficiency and sensitivity values
  • Fixed other tests that were affected by the changes

@lacoak21 lacoak21 requested a review from greglucas December 9, 2025 14:55
@lacoak21 lacoak21 self-assigned this Dec 9, 2025
@lacoak21 lacoak21 added this to IMAP Dec 9, 2025
boundary_scale_factors = (
    pd.read_csv(ancillary_files[bsf_descriptor], header=None, skiprows=1)
    .to_numpy(dtype=float)[:, 2:]
    .T
)
@lacoak21 (author):

I decided to transpose all these arrays to make them follow the dimension order that everything else follows.

Collaborator:

You are transposing after the indexing though. Aren't you getting different values now? Before you were getting arr[:, :2], now you are doing arr[:, 2:].T.

@lacoak21 (author):

I updated the code downstream to handle the transposed values.

Collaborator:

I'm still confused though, you are getting fundamentally different values out of the file itself aren't you? np.arange(4)[:2] != np.arange(4)[2:]
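The collaborator's point can be checked directly: the two slices select entirely different columns, so transposing afterwards cannot make them equivalent. A minimal sketch (the toy array values are illustrative, not from the actual ancillary files):

```python
import numpy as np

# A small stand-in for the array read from the CSV.
arr = np.arange(12).reshape(3, 4)

first_two_cols = arr[:, :2]   # columns 0-1 (what the old code selected)
last_cols_t = arr[:, 2:].T    # columns 2 onward, transposed (the new code)

# Shapes differ: (3, 2) vs (2, 3), and the selections share no elements.
assert np.intersect1d(first_two_cols, last_cols_t).size == 0
```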

@lacoak21 (author):

OH! I see. Wow, yikes, good catch. I need to go understand why the test didn't catch this.

@lacoak21 (author):

Ok, I was doing that just for the ra_and_dec variable.

The other variables were [:, 2:] and are still [:, 2:].

It used to be

index_grid = pd.read_csv(
    ancillary_files[index_descriptor], header=None, skiprows=1
).to_numpy(dtype=float)
ra_and_dec = index_grid[:, :2]

but now it's

index_grid = pd.read_csv(
    ancillary_files[index_descriptor], header=None, skiprows=1
).to_numpy(dtype=float).T
ra_and_dec = index_grid[:2, :]
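For ra_and_dec the before/after versions really are the same data, just with the dimension order swapped: transposing first and then taking the first two rows equals the transpose of taking the first two columns. A quick sketch with a stand-in array:

```python
import numpy as np

# Stand-in for the CSV contents; shape and values are illustrative.
grid = np.arange(12, dtype=float).reshape(4, 3)

old_ra_dec = grid[:, :2]    # old: (rows, 2)
new_ra_dec = grid.T[:2, :]  # new: transpose first, then first two rows -> (2, rows)

# Identical values in transposed dimension order.
assert np.array_equal(new_ra_dec, old_ra_dec.T)
```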

Comment on lines +51 to +62
nspins = 5522
nominal_spin_seconds = 15.0
spin_data = pd.DataFrame(
    {
        "spin_start_met": np.linspace(
            pointing_range_met[0], pointing_range_met[1], nspins
        ),
        "spin_period_sec": np.full(nspins, nominal_spin_seconds),
        "spin_phase_valid": np.ones(nspins),
        "spin_period_valid": np.ones(nspins),
    }
)
Collaborator:

There are utility functions use_fake_spin_data_for_time and generate_fake_spin_data that you might be able to use here?

@lacoak21 (author):

use_fake_spin_data_for_time is not giving me the correct number of nominal spins because it is marking some as invalid. I did it this way so I can explicitly set the number of valid spins.


xr.Dataset
Dataset with helio index maps.
"""
with sp.KernelPool(kernel_paths):
Collaborator:

I think we were trying to have all SPICE functions in the spice module, so if you need to furnish kernels etc., that would all be handled in one common location and not within any individual instrument. I'm not entirely clear why you need this, though, or why we can't rely on the current furnishing of kernels. Do you want to calculate ancillary files based on simulated spinning or the real spin data that we actually get? I.e., I'd assume you want the simulated values during testing but real values when you are running the pipeline. Overall, I'm not entirely following how all of these files are made and whether this is an OK use of the kernel pool, or if it should be moved to the spice module or handled entirely differently.

@subagonsouth pinging you as the resident spice expert.

Contributor:

I must admit, I don't understand what is going on here. I'm not clear why the helio index maps get made using simulated kernels and a fixed time.

In terms of correctly furnishing kernels so that they are only furnished for the scope of this context manager after which the kernels automatically furnished in the pre-processing step get restored, this looks OK.
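The scoping behavior described here (kernels furnished only inside the context manager, with the previously furnished set restored on exit) can be illustrated with a small pure-Python analogy. This is not the spiceypy implementation; `FURNISHED` and `kernel_scope` are hypothetical names used only to show the save/restore pattern:

```python
from contextlib import contextmanager

# Hypothetical stand-in for the globally furnished kernel set.
FURNISHED = ["baseline_meta_kernel.tm"]

@contextmanager
def kernel_scope(kernels):
    """Furnish `kernels` only for the with-block, then restore
    whatever was furnished beforehand (mirroring sp.KernelPool)."""
    saved = list(FURNISHED)
    FURNISHED[:] = list(kernels)
    try:
        yield
    finally:
        FURNISHED[:] = saved

with kernel_scope(["sim_pointing.bc"]):
    assert FURNISHED == ["sim_pointing.bc"]
# On exit, the pre-existing furnished set is back.
assert FURNISHED == ["baseline_meta_kernel.tm"]
```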

@lacoak21 (author):

Yeah, I was confused at first as well. These were originally supposed to be calibration files to be used for every pointing that ULTRA gave us, but they made one for each energy bin, so that would be ~90 calibration files. I thought about making a script that combines them, but Nick mentioned it would be good to have SDC code to create them. Greg and I thought it might be worth it to just have the index creation algorithm be part of the pipeline.

I asked about using the non-simulated kernels, and Nick said that extra care would need to be taken: because the spin duration isn't exactly 15 seconds, you're picking a spin within the pointing and rotating the spacecraft based on it, and when you go to scale it you'd have to track that for every pointing.

Maybe that's fine? I'm also thinking about just making them once and reading them in as ancillary files, because recalculating them every time might be overkill, especially since they will only change if the nside changes.

Contributor:

Ok, I think that I am fine with this approach. Like I said, the kernel handling looks like it is done correctly and shouldn't cause any side-effects.

@lacoak21 (author):

OK, great. I'm also going to use the start ET value of the pointing instead of the hardcoded value. That should have been done this way from the start.

Contributor:

But doesn't that mean you will run into kernel coverage issues with the simulated kernels?

@lacoak21 (author):

I meant the start ET of the simulated pointing kernel.
