Using flux-calibrated models instead of normalized models. #38
OK, I fleshed out some of the details here. We definitely don't want to use the non-normalized (i.e. raw) model grids in the spectral emulation process, for the reasons I speculated about above. The right thing to do is probably to construct a scalar flux-ratio correction factor as a function of the model grid labels (Teff, log g, [Fe/H]). We can do this in the following way: compute the "grid mean flux" for each model spectrum, for each of the m spectral orders, and then regress the absolute flux against the stellar parameters (e.g. with Gaussian process regression). The GP regression strategy would be similar to the Spectral Emulator portion of the code, but since the flux mean is just a single scalar, we don't need dimensionality reduction (PCA), making the regression fairly straightforward, I think. The implementation would be as follows:
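A minimal sketch of what such a scalar-flux regression could look like, using scikit-learn's `GaussianProcessRegressor` to stand in for the GP machinery already in the repo; the array names, kernel choice, and length scales are all illustrative assumptions, not the actual implementation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical inputs, computed upstream from the raw model grid:
#   grid_params:  (N, 3) array of [Teff, logg, FeH] for each grid spectrum
#   mean_fluxes:  (N,) array of "grid mean flux" values for one spectral order

def train_flux_scalar_gp(grid_params, mean_fluxes):
    # Regress log flux, since absolute flux spans orders of magnitude in Teff
    y = np.log(mean_fluxes)
    kernel = ConstantKernel(1.0) * RBF(length_scale=[300.0, 0.5, 0.5])
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    return gp.fit(grid_params, y)

def predict_flux_scalar(gp, teff, logg, feh):
    # Interpolated flux correction factor (and a rough 1-sigma uncertainty)
    mu, sigma = gp.predict([[teff, logg, feh]], return_std=True)
    return np.exp(mu[0]), np.exp(mu[0]) * sigma[0]
```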
I'd have to think a bit more about the error propagation, but I think it's straightforward: it's just a scalar multiple.
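For instance (a toy illustration with made-up values, not code from the repo): scaling a spectrum by a scalar c scales its covariance by c², so the propagation is a one-liner.

```python
import numpy as np

f = np.linspace(1.0, 2.0, 100)   # made-up model fluxes
cov = 0.01 * np.eye(100)         # made-up flux covariance matrix
c = 2.5e-13                      # hypothetical flux-ratio correction factor

f_scaled = c * f
cov_scaled = c**2 * cov          # Var(c f) = c^2 Var(f)
```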
You're right about the need to flux-standardize (not continuum-normalize) the input grid; this removes the largest source of variance in the models so that the emulation is more accurate. Since there are many grids that (helpfully) provide spectra with real units, like PHOENIX, it would be very nice to propagate this information so that it may be used in mixture model applications. I think your suggestion of keeping a separate flux coefficient sounds reasonable. My naive guess is that this function might be smooth enough (in Teff, log g, [Fe/H]) for the regression to work well.
Ok, we figured this out after some deliberation. Note this portion of the code:

```python
# If we want to normalize the spectra, we must do it now,
# since later we won't have the full EM range
if self.norm:
    f *= 1e-8  # convert from erg/cm^2/s/cm to erg/cm^2/s/A
    F_bol = trapz(f, self.wl_full)
    f = f * (C.F_sun / F_bol)  # bolometric luminosity is always 1 L_sun
```

There's no need to normalize the spectra in the last line above. From the Husser et al. (2013) paper, the fluxes are given "on the stellar surface."
Flux "on the stellar surface" means they've integrated over a stellar disk filling half the sky; in other words, they've integrated the specific intensity over solid angle,

$$F_\lambda = \int_{2\pi} I_\lambda \cos\theta \, d\Omega = \pi I_\lambda,$$

which yields a factor of $\pi$. So what we should do for the code is:
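A guess at the revised block, reusing the variable names from the snippet above (a sketch of the fix being described, not the actual commit):

```python
if self.norm:
    f *= 1e-8  # convert from erg/cm^2/s/cm to erg/cm^2/s/A
    # Keep the absolute surface flux: drop the old rescaling
    # `f = f * (C.F_sun / F_bol)` so that real flux units survive
    # for the mixture-model use case.
```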
However, the flux standardization in the emulator step should not be removed, because it serves a separate purpose. I will post about this later in more detail.
Short update on the flux standardization. I'm tracking scalar values in front of the emulated spectra so that we can maintain absolute fluxes:

```python
# Normalize all of the fluxes to an average value of 1
# in order to remove uninteresting correlations
flux_scalars = np.average(fluxes, axis=1)[np.newaxis].T
fluxes = fluxes / flux_scalars
```

I then attach instances of these flux scalars to the emulated spectra.
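A self-contained toy example of this round trip (fabricated fluxes; the PCA/emulation step is elided):

```python
import numpy as np

rng = np.random.default_rng(0)
fluxes = rng.lognormal(sigma=0.3, size=(4, 1000))  # four fabricated model spectra

# Standardize: divide each spectrum by its mean so all average to 1
flux_scalars = np.average(fluxes, axis=1)[np.newaxis].T
standardized = fluxes / flux_scalars

# ... PCA / emulation would operate on `standardized` here ...

# Re-apply the stored scalars to recover absolute fluxes
restored = standardized * flux_scalars
assert np.allclose(restored, fluxes)
```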
Demo of the flux scalars [figure not reproduced here]. (Don't read too much into these values; they have been normalized with the deprecated system discussed above.) The flux scalars should generally increase with Teff in a revised system.
Here is a commit that implements the flux-scalars approach. This code breaks backwards compatibility, since it inserts a new positional argument that is not expected by the existing methods. This commit is my current workaround, and by no means a recommendation for the path forward, but perhaps a good starting point.
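For context, a generic illustration of the breakage (hypothetical function name, not the repo's API), plus the keyword-with-default alternative that would preserve old call sites:

```python
# Old signature: existing callers do load_flux(params)
def load_flux(params, flux_scalar):
    """New positional arg: load_flux(params) now raises TypeError."""
    ...

# Backwards-compatible alternative: keyword argument with a default
def load_flux_compat(params, flux_scalar=None):
    """Old call sites keep working; new code may pass flux_scalar."""
    ...
```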
The problem: If we implement the mixture model feature, we will probably need to use flux-calibrated model spectra; in other words, the non-normalized spectra natively produced by the model grid (e.g. Phoenix provides real flux units for its models). The reason for needing flux-calibrated models should be clear: the relative flux of the mixture model components should scale with different guesses for the effective temperature. There are a few problems that could arise:

1. We'll need to toggle the normalization on and off depending on whether we're in mixture-model mode or not (that's easy enough).
2. The PCA procedure in the spectral emulation step might require more, or at least different, eigenspectra, since the dominant variance will now come from the absolute flux level and not the spectral lines. (I haven't fully fleshed this out, but I suspect the default of applying normalization is there for a reason; see the toy demonstration below.)
3. The logOmega term might take on a different meaning, or at least different absolute values.
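A toy demonstration of the variance point above (fabricated spectra, not the repo's PCA code): when absolute flux levels vary strongly across the grid, the first principal component is dominated by overall scale rather than line features.

```python
import numpy as np

rng = np.random.default_rng(1)
# 20 fake "spectra": a strongly varying absolute scale times weak line patterns
scales = rng.uniform(1.0, 100.0, size=(20, 1))
lines = 1.0 + 0.01 * rng.standard_normal((20, 500))
raw = scales * lines

def top_pc_variance_fraction(X):
    # Fraction of total variance captured by the first principal component
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    return s[0]**2 / np.sum(s**2)

print(top_pc_variance_fraction(raw))   # close to 1: absolute flux dominates
norm = raw / raw.mean(axis=1, keepdims=True)
print(top_pc_variance_fraction(norm))  # much smaller: variance is in the lines
```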
Suggested solution
Just experiment with it and see how it performs. :)