ULTRA L1C HELIO index maps #2492
Conversation
boundary_scale_factors = (
    pd.read_csv(ancillary_files[bsf_descriptor], header=None, skiprows=1)
    .to_numpy(dtype=float)[:, 2:]
    .T
)
I decided to transpose all these arrays to make them follow the dimension order that everything else follows.
You are transposing after the indexing though. Aren't you getting different values now? Before you were getting arr[:, :2], now you are doing arr[:, 2:].T.
I updated the code downstream to handle the transposed values.
I'm still confused, though: you are getting fundamentally different values out of the file itself, aren't you? np.arange(4)[:2] != np.arange(4)[2:]
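A minimal sketch of the concern, using a toy array rather than the actual CSV contents:

import numpy as np

arr = np.arange(8).reshape(2, 4)  # stand-in for the parsed ancillary file
print(arr[:, :2])    # first two columns: [[0 1], [4 5]]
print(arr[:, 2:].T)  # last two columns, transposed: [[2 6], [3 7]]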
OH! I see. Wow, yikes, good catch. I need to go understand why the test didn't catch this.
OK, I was doing that just for the ra_and_dec variable.
The other variables were [:, 2:] and are still [:, 2:].
It used to be

index_grid = pd.read_csv(
    ancillary_files[index_descriptor], header=None, skiprows=1
).to_numpy(dtype=float)
ra_and_dec = index_grid[:, :2]

but now it's

index_grid = pd.read_csv(
    ancillary_files[index_descriptor], header=None, skiprows=1
).to_numpy(dtype=float).T
ra_and_dec = index_grid[:2, :]
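Those two forms hold the same values, just transposed; a toy check (not the real file):

import numpy as np

index_grid = np.arange(12, dtype=float).reshape(4, 3)  # stand-in for the CSV
old = index_grid[:, :2]    # shape (n_rows, 2)
new = index_grid.T[:2, :]  # shape (2, n_rows)
assert np.array_equal(old.T, new)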
nspins = 5522
nominal_spin_seconds = 15.0
spin_data = pd.DataFrame(
    {
        "spin_start_met": np.linspace(
            pointing_range_met[0], pointing_range_met[1], nspins
        ),
        "spin_period_sec": np.full(nspins, nominal_spin_seconds),
        "spin_phase_valid": np.ones(nspins),
        "spin_period_valid": np.ones(nspins),
    }
)
There are utility functions use_fake_spin_data_for_time and generate_fake_spin_data that you might be able to use here?
use_fake_spin_data_for_time is not giving me the correct number of nominal spins because it is marking some as invalid. I did it this way so I can explicitly set the number of valid spins.
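A quick sanity check of that construction (illustrative only): since the validity columns are all ones, the valid-spin count is exactly nspins.

# Every spin in the hand-built spin_data above is marked valid.
assert int(spin_data["spin_phase_valid"].sum()) == nspins
assert int(spin_data["spin_period_valid"].sum()) == nspins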
xr.Dataset
    Dataset with helio index maps.
"""
with sp.KernelPool(kernel_paths):
I think we were trying to have all SPICE functions in the spice module, so if you need to furnish kernels etc., that would all be handled in one common location and not within any individual instrument. I'm not entirely clear why you need this, though, and why we can't rely on the current furnishing of kernels. Do you want to calculate ancillary files based on simulated spinning or the real spin data that we actually get? I'd assume you want the simulated values during testing but real values when you are running the pipeline. Overall, I'm not entirely following how all of these files are made and whether this is an OK use of the kernel pool, or if it should be moved to the spice module or handled entirely differently.
@subagonsouth pinging you as the resident SPICE expert.
I must admit, I don't understand what is going on here. I'm not clear why the helio index maps get made using simulated kernels and a fixed time.
In terms of correctly furnishing kernels, this looks OK: the kernels are only furnished for the scope of this context manager, after which the kernels automatically furnished in the pre-processing step get restored.
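For reference, a minimal sketch of that scoping behavior; the kernel file names here are placeholders, not the project's actual kernels:

import spiceypy as sp

kernel_paths = ["sim_attitude.bc", "sim_ephemeris.bsp", "leapseconds.tls"]

# KernelPool sets aside whatever is currently furnished, loads only
# kernel_paths for the body of the block, and restores the previous
# kernel set on exit.
with sp.KernelPool(kernel_paths):
    et = sp.str2et("2026-01-01T00:00:00")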
Yeah, I was confused at first as well. These were originally supposed to be calibration files to be used for every pointing that ULTRA gave us, but they made one for each energy bin, so that would be ~90 calibration files. I thought about making a script that combines them, but Nick mentioned it would be good to have SDC code to create them. Greg and I thought it might be worth it to just have the index creation algorithm be part of the pipeline.
I asked about using the non-simulated kernels, and Nick said that extra care would need to be taken: because the spin duration isn't exactly 15 seconds, you're picking a spin within the pointing and rotating the spacecraft based on that. When you go to scale it, you'd have to track that for every pointing.
Maybe that's fine? I'm also thinking about just making them once and reading them in as ancillary files, because recalculating them every time might be overkill, especially since they will only change if the nside changes.
OK, I think I am fine with this approach. Like I said, the kernel handling looks like it is done correctly and shouldn't cause any side effects.
OK, great. I'm also going to use the start ET value of the pointing instead of the hardcoded value. That should have been done this way from the start.
But doesn't that mean you will run into kernel coverage issues with the simulated kernels?
I meant the start ET of the simulated pointing kernel.
Change Summary
Overview
Create ULTRA helio index maps on the fly, to be used in the calculation of the exposure factor, efficiencies, and geometric function. This is an alternative to reading in many ancillary files. It was decided that writing the code to do this is best in case we need to change it.
Most of the code to produce the index map was written by Claude after importing the Java code that Nick uses. There were a few small issues to fix afterward.
Updated Files
- imap_processing/ultra/l1c/l1c_lookup_utils.py
Testing