
Rebranding vsini as resolution uncertainty in low resolution spectra #78

Open · gully opened this issue May 10, 2017 · 5 comments

@gully (Collaborator) commented May 10, 2017

At (sufficiently) high spectral resolution, and for (sufficiently) rapidly rotating stars, the line broadening is dominated by the (projected) rotational broadening, often parameterized by vsini. At low spectral resolution, or for low-vsini sources, the instrumental resolution dominates the spectral broadening.
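For intuition, a quick back-of-the-envelope comparison (plain Python, not Starfish code): converting a resolving power R to a velocity FWHM via dv = c/R shows which regime a given setup is in. The numbers below are purely illustrative.

    # Compare the instrumental velocity FWHM, dv = c / R, against vsini to
    # see which broadening mechanism dominates.  All numbers illustrative.
    c_kms = 2.998e5  # speed of light, km/s

    for R, vsini in [(45000, 30.0),   # high-res echelle, fast rotator
                     (2000, 5.0)]:    # low-res prism, slow rotator
        dv = c_kms / R  # instrumental velocity FWHM, km/s
        regime = "rotation-dominated" if vsini > dv else "resolution-dominated"
        print("R = %i: FWHM = %.1f km/s, vsini = %.1f km/s -> %s"
              % (R, dv, vsini, regime))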

In the latter case, imperfections in the spectral resolution estimate can add to model-data mismatches. Some of this residual structure can be taken up by the global Gaussian Process covariance structure. To first order, a finite vsini approximates the additional broadening from an underestimated instrumental resolution. However, the right thing to do would be to rebrand vsini as a small, extra contribution to the instrumental spectral resolution.

From a practical standpoint, this rebranding would require one small change in the update_theta() step: swap the rotational broadening kernel for an instrumental resolution taper, mimicking the kernel used in the grid_tools.py load_flux() routine.
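A minimal sketch of what that swap might look like, with fl, ss, and sigma standing in for the actual Starfish variables (a Gaussian taper in Fourier space, in the spirit of load_flux(), not the real implementation):

    import numpy as np

    # Hypothetical stand-in for the update_theta() change: apply a Gaussian
    # instrumental taper in Fourier space instead of the rotational kernel.
    # fl: model flux on a log-lambda grid; ss: rfft frequencies in cycles
    # per (km/s); sigma: extra instrumental broadening (Gaussian sigma, km/s).
    def apply_resolution_taper(fl, ss, sigma):
        FF = np.fft.rfft(fl)
        taper = np.exp(-2 * (np.pi ** 2) * (sigma ** 2) * (ss ** 2))
        return np.fft.irfft(FF * taper, len(fl))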

I have implemented a demonstration of this rebranding, and found one subtle bug. At some point in the code there is a self.ss[0] = 0.01 hack with the note # junk so we don't get a divide by zero error. My (admittedly limited, but seemingly reasonable) interpretation of this term is that it assigns a finite DC power to avoid numerical artifacts. In practice its value may affect your spectra, depending on the absolute mean level of your spectrum. Indeed, in an experiment toggling this hardcoded value on and off, I see that the mean level of my model spectrum changes.
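A toy illustration of why the mean level moves (pure numpy, not Starfish code): the zeroth rfft coefficient carries the DC (mean) level, so whatever value the taper takes at the hacked ss[0] rescales the mean of the output spectrum.

    import numpy as np

    fl = 1.0 + 0.01 * np.random.randn(1024)  # toy spectrum with mean ~1
    FF = np.fft.rfft(fl)

    ss = np.fft.rfftfreq(len(fl), d=1.0)     # ss[0] == 0.0
    ss[0] = 0.01                             # the "junk" frequency hack
    sigma = 20.0                             # toy broadening value
    taper = np.exp(-2 * (np.pi ** 2) * (sigma ** 2) * (ss ** 2))

    # The DC term gets scaled by taper[0] < 1, so the mean level drops
    # unless taper[0] is explicitly reset to 1 afterwards.
    fl_tapered = np.fft.irfft(FF * taper, len(fl))
    print(fl.mean(), fl_tapered.mean())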

I'm not sure this affects many users: I'm working with a customized and hacky version of Starfish (i.e. my fork's mix_model_omega2 branch... no shame in my bad branch hygiene though). So I wouldn't sound any alarms about this. For what it's worth, I do not get any divide-by-zero errors, so something seems to be working fine without the hack.

Finally, if this procedure were followed, it's conceivable that you could have either positive or negative perturbations to the kernel. Since we multiply in Fourier space, in principle we could divide by a taper to recover high-frequency structure, meaning we could support either over- or underestimates of the spectral resolution! I haven't really worked this out, but I think it would work. Cool, huh!

@mveyette commented

Taking this a step further, you could also account for how resolution changes across an order. I am not sure the best way to implement this, but it would be nice to be able to marginalize over uncertainty in spectrometer resolution. Often these values are only reported to 1 or 2 sig figs and reported resolutions can be based on overestimated design specs instead of measured performance. The actual resolution can also vary by over 30% across an order (or even more for a prism-only spectrometer).

@gully (Collaborator, Author) commented May 10, 2017

I actually just implemented wavelength-dependent spectral resolution for the SpeX Prism mode, whose resolution varies by a factor of ~2.5. I added a method to the relevant instrument class that returns R(lambda), then modified the FFT instrumental resolution step to iteratively assign smoothed chunks across the spectrum at different resolutions. This change makes grid.py --create much slower, but you only have to run it once when setting up a new spectrum project, and it was relatively fast to begin with.
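A rough sketch of the chunked approach, assuming a log-uniform wavelength grid and a hypothetical R_of_lambda callable (the real implementation lives in my branch and differs in detail):

    import numpy as np

    def smooth_variable_R(wl, fl, R_of_lambda, n_chunks=10):
        """Smooth fl at n_chunks local resolutions and stitch the chunks."""
        c_kms = 2.998e5
        dv = c_kms * np.median(np.diff(wl) / wl[:-1])  # velocity spacing, km/s
        ss = np.fft.rfftfreq(len(fl), d=dv)            # cycles per (km/s)
        FF = np.fft.rfft(fl)
        out = np.empty_like(fl)
        for idx in np.array_split(np.arange(len(wl)), n_chunks):
            R = R_of_lambda(wl[idx].mean())            # local resolving power
            sigma = c_kms / R / 2.35                   # FWHM -> Gaussian sigma
            taper = np.exp(-2 * (np.pi ** 2) * (sigma ** 2) * (ss ** 2))
            out[idx] = np.fft.irfft(FF * taper, len(fl))[idx]
        return out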

@mveyette commented

Nice! In theory, you could deconvolve the instrument profile as a function of wavelength (or pixel for echelles) from arc lamp lines for instruments that don't provide resolution curves. Then just apply it when you create the grid like you did. Any deviation from that measured performance (say, from flexure) would likely be too small to warrant including it in the model.
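As a sketch of how that measurement might go (hypothetical helper, not part of Starfish): fit a Gaussian to each isolated, unresolved arc line and convert the fitted FWHM into a local resolving power.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amp, mu, sig, base):
        return amp * np.exp(-0.5 * ((x - mu) / sig) ** 2) + base

    def measure_R(wl, counts, line_centers, window=5.0):
        """Estimate R = lambda / FWHM at each arc line center."""
        R = []
        for mu0 in line_centers:
            m = np.abs(wl - mu0) < window          # cut out one line
            p0 = [counts[m].max(), mu0, window / 5.0, np.median(counts[m])]
            popt, _ = curve_fit(gaussian, wl[m], counts[m], p0=p0)
            fwhm = 2.35 * abs(popt[2])             # sigma -> FWHM
            R.append(popt[1] / fwhm)
        return np.array(R)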

@gully (Collaborator, Author) commented May 30, 2017

Update:

> in principle we could divide by a taper to recover high-frequency structure, meaning we could support either over- or underestimates of the spectral resolution! I haven't really worked this out, but I think it would work. Cool, huh!

I now have this working. You put your best guess for the spectral resolution in the Instrument class, and then sample in "epsilon", the perturbation to the instrumental resolution, centered on zero and ranging over positive and negative values:

    # delR is rebranded as deltaV, the uncertainty in the FWHM of the
    # velocity resolution (km/s); np.sign() flips the exponent for a
    # negative delR, so the "taper" divides out broadening instead
    sigma = p.delR / 2.35  # FWHM -> Gaussian sigma, in km/s
    taper = np.exp(-2 * (np.pi ** 2) * np.sign(sigma) * (sigma ** 2) * (self.ss ** 2))
    FF_tap = FF * taper

Notice the np.sign(). It seems to work fine. Best to put a reasonable prior on it, perhaps using the strategies Mark described above. Again, only minute perturbations are supported, since you've already downsampled from the native-resolution models at this point.
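For concreteness, one possible prior (the width sigma_delR is a user choice tied to how well the resolution is known, say ~30% of the nominal velocity FWHM per Mark's numbers above; it is not a Starfish default):

    import numpy as np

    def lnprior_delR(delR, sigma_delR):
        """Gaussian log-prior on delR, centered on zero, width sigma_delR (km/s)."""
        return (-0.5 * (delR / sigma_delR) ** 2
                - np.log(sigma_delR * np.sqrt(2.0 * np.pi)))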

@iancze (Collaborator) commented Jun 1, 2017

I am excited about all of these enhancements for dealing with varying spectral resolution, and like you showed me a few weeks ago, I think it is working very well.

I just wanted to point out the reason for the self.ss[0] = 0.01 term that appears throughout the code, e.g., here, and alleviate your fears that it might be a bug.

These ss terms are eventually used for evaluating the taper kernel here, where they appear in a division. This means that if we included the ss[0] = 0.0 frequency, we would get a divide-by-zero error. Making this a "junk frequency" is OK, because we later set the corresponding term in the final product to 1 (sb[0] = 1.).

However, we could definitely make it clearer what is going on.
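For instance, a minimal standalone illustration of the pattern, using a simple sinc kernel as a stand-in for the actual broadening kernel:

    import numpy as np

    ss = np.fft.rfftfreq(1024, d=2.0)  # ss[0] == 0.0
    ss[0] = 0.01                       # junk value; avoids the 0/0 below
    ub = 2.0 * np.pi * 10.0 * ss       # argument of some broadening kernel
    sb = np.sin(ub) / ub               # would be 0/0 at ss[0] = 0.0
    sb[0] = 1.0                        # restore the correct DC limit, sinc(0) = 1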
