Priors on the normalization of the global and emulator covariance matrix #127

zjzhang42 opened this issue Apr 13, 2019 · 1 comment



The normalization of the global covariance is described by a free parameter logAmp (the actual normalization is (10**logAmp)**2). If this normalization is much higher than the median of the squared flux uncertainties, the final covariance matrix can be dominated by the global covariance matrix. In that case, when we generate plots showing the fitting residuals and the 1/2/3-sigma model scatter, even large residuals could be regarded as a good fit, since they are still consistent within 1/2/3 sigma. However, this might not always be appropriate, because Starfish could be adding "artifacts" into the covariance matrix to force the fit to look good. Some non-Starfish users have also expressed this concern to me.

To clarify, I am not blaming the spectral emulator matrix or the global covariance matrix. On the contrary, I think they are exactly what makes Starfish an improvement over traditional $\chi^{2}$ fitting: the spectral emulator propagates interpolation uncertainties that linear interpolation cannot capture, and the global covariance structure accounts for instrumental effects. Including both the spectral emulator and the global covariance helps deliver more reasonable error estimates (i.e., small errors on best-fit parameters do not by themselves mean a good fit). However, I think it might be necessary to make sure we are not over-adding terms to the covariance matrix, which could make Starfish not smart but "arrogant".

Therefore, I am wondering whether we need to add a prior on logAmp. An initial proposal would be something like: 10**logAmp should never exceed a certain fraction (e.g., 0.5) of the median flux uncertainty, although I cannot think of a principled way to choose that fraction, and such a fraction seems like yet another artifact... I naively think it should be smaller than about 1 to make sure the global covariance does not dominate, but I'd really like to hear your advice!
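
To make the proposal concrete, here is a minimal sketch of the kind of hard-cutoff prior I have in mind (the fraction of 0.5 and the function name are my own placeholders, not anything currently in Starfish):

```python
import numpy as np

def logamp_logprior(logAmp, sigma, frac=0.5):
    """Hypothetical hard-cutoff prior: 10**logAmp must stay below
    `frac` times the median flux uncertainty; flat otherwise."""
    if 10 ** logAmp > frac * np.median(sigma):
        return -np.inf  # reject samples where the global kernel would dominate
    return 0.0          # flat (improper) prior inside the allowed range
```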

On the other hand, I know the global covariance is motivated by the instrumental profile, so I was wondering whether the instrument properties could be used to set such a prior. Priors on the width of the global covariance kernel, described by the hyper-parameter l, can indeed be set from the FWHM of the instrumental profile, or from the resolution as a function of wavelength (I will post my research on this soon). But I do not know how to determine a reasonable prior on the normalization of the global covariance from the properties of the instrumental profile (maybe we cannot?).
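
Just to sketch the idea for l (this is only an assumption on my part, not the final form I will post): for a spectrograph with resolving power R, the FWHM of the line-spread function in velocity space is roughly c/R, and one could bound l to a window around that value:

```python
import numpy as np

C_KMS = 2.998e5  # speed of light in km/s

def l_logprior(l, R, lower=0.5, upper=3.0):
    """Hypothetical prior on the global-kernel length scale l (km/s):
    flat between `lower` and `upper` times the instrumental FWHM
    (FWHM ~ c / R for resolving power R)."""
    fwhm_kms = C_KMS / R
    if lower * fwhm_kms < l < upper * fwhm_kms:
        return 0.0
    return -np.inf
```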

A related question: is it necessary at all to add priors to make sure the emulator covariance matrix does not exceed the squared flux uncertainties by too much? (I may check the math in the paper/code again to better understand this.)
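
As a quick (hypothetical) diagnostic for this, one could simply check what fraction of pixels have a propagated emulator variance larger than the squared flux uncertainty, e.g.:

```python
import numpy as np

def emulator_dominance(S_emu, sigma):
    """Fraction of pixels where the diagonal of the propagated emulator
    covariance exceeds the squared flux uncertainty (hypothetical check)."""
    return np.mean(np.diag(S_emu) > sigma ** 2)
```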

I may run some experiments comparing fits with and without my proposed normalization prior on the global covariance, and will post updates when they are ready.

iancze commented Apr 15, 2019

Although the covariance matrix for the emulator and the covariance matrix for the data both appear in the likelihood calculation, they have different origins and serve different purposes. Once the emulator is tuned to a subset of the synthetic spectral library, its parameters remain fixed, regardless of which specific data spectrum you may be fitting. The uncertainty in the range of emulated spectra is set by the density of grid points in your synthetic library, how many principal components you used to decompose it, and how well the tuning process went. The uncertainty from the emulator is marginalized over in the final calculation (Eqn 55).

The covariance matrix for the actual data is parameterized by (among other things) a "global" kernel, which can inflate the correlated noise in a fit. This isn't too different from fitting for a jitter term in an RV timeseries, and is meant to absorb some of the correlated errors that result from fitting a synthetic spectrum with only a limited set of parameters available. If you had access to the many tunable parameters that go into synthesizing a single stellar spectrum (T-P profile, correct opacities, microturbulence, limb-darkening, etc.) then you could probably synthesize a truly correct model that would fit within the measurement uncertainties. But the point is that doing such a calculation over more than a relatively small wavelength region is computationally prohibitive, so most fits will be dominated by (small-scale) systematic uncertainties, not observational uncertainties (e.g., see Section 2.3 of the paper). That said, if the only model that will fit your data requires really huge global covariance values, then that is a sign that your model isn't very good. If the fit isn't good, then it isn't a good fit, and this should be evident from the mismatch between the model and the data. I'm not sure what you mean by artifacts in this context.
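
To be concrete about how the global kernel enters, here is a rough numpy sketch (illustrative only; the actual implementation differs in detail, e.g. in tapering and the exact distance metric): the (10**logAmp)**2 amplitude multiplies a Matérn-3/2-style kernel in velocity space, which is added to the diagonal matrix of per-pixel flux variances.

```python
import numpy as np

C_KMS = 2.998e5  # speed of light in km/s

def global_covariance(wl, logAmp, l):
    """Sketch of a Matern-3/2 "global" kernel in velocity space.
    Illustrative only; the real implementation also tapers the kernel."""
    amp2 = (10 ** logAmp) ** 2
    # approximate velocity separation between every pair of pixels
    dv = C_KMS * np.abs(np.log(wl[:, None] / wl[None, :]))
    r = np.sqrt(3.0) * dv / l
    return amp2 * (1.0 + r) * np.exp(-r)

def data_covariance(wl, sigma, logAmp, l):
    """Total data covariance: per-pixel variances on the diagonal plus
    the correlated global kernel."""
    return np.diag(sigma ** 2) + global_covariance(wl, logAmp, l)
```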

I'm surprised that others would classify this sort of modeling approach as arrogant, because I would think it's actually much more conservative in its assumptions than, say, chi^2. By allowing the noise to be parameterized (and inferring those parameters), such an approach opens the otherwise strict assumptions of chi^2 fitting up to investigation. For example, typical chi^2 fitting assumes that the model you are using is perfect (in the sense that the data could be considered "generated" from the model) and that the only source of noise is Poisson. If you fit for a global covariance parameter and it turns out to be small, then that's a good sign that your model does actually fit the data pretty well (i.e., the noise term is "penalized" in the same sense as adding jitter terms; one can't arbitrarily inflate the noise and get a good fit for all models). If you have a large global covariance parameter, then this is informing you that your range of available models does not accurately represent the data, and that other systematic sources of noise are required. In chi^2 fitting this would be hidden from you.
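
Here is a toy numpy example of that penalty (my own illustration, not taken from the codebase): when the residuals are already consistent with the quoted uncertainties, inflating a jitter-like amplitude only lowers the Gaussian log-likelihood, because of the log-determinant term.

```python
import numpy as np

def gaussian_loglike(resid, sigma, amp):
    """Diagonal Gaussian log-likelihood with an extra jitter-like
    variance amp**2 added to the quoted uncertainties sigma."""
    var = sigma ** 2 + amp ** 2
    return -0.5 * np.sum(resid ** 2 / var + np.log(2 * np.pi * var))

rng = np.random.default_rng(0)
sigma = np.full(1000, 0.01)
resid = rng.normal(0.0, 0.01, size=1000)  # model already fits within the errors

for amp in (0.0, 0.01, 0.05):
    print(f"amp = {amp}: logL = {gaussian_loglike(resid, sigma, amp):.1f}")
# the log-likelihood is highest near amp ~ 0: inflating the noise is penalized
```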

To your original question, it would be fine to put a prior on the value of logAmp if you so desire. But the fact that large values seem to be required is probably telling you something about how well the synthetic model grid you are using can reproduce your dataset. It may be a useful exercise to look at the fits in detail to see where there is a large data-model mismatch. At the end of the day, the quality of your parameter inferences depends most strongly on how good your models are.
