sonora elf-owl as successor to bobcat and cholla #97
The grids are huge! Probably split up for that reason indeed. I will start with the T-type grid, since that one is probably of most interest to you 😉. These are cloud-free grids though.
I would also be very interested! Maybe it would be a good occasion to introduce the possibility of having (at least block-wise) a non-constant resolution in the models? Already for the Sonora family this would make sense: with a constant resolution, the spectrum files either have (too many) points where there are no data originally, which is problematic if the data have high resolution (the user would generally not know that the Delta lambda of the model files […]). When convolving with a non-constant kernel, one can "stretch the data with the inverse scale (i.e., at places where you would want the Gaussian width to be 0.5x the base width, stretch the data to 2x)", but I am not sure if this is helpful everywhere that the model data are used. But maybe block-wise constant would not be too much work?
The constant lambda/D_lambda is needed in order to enable fast smoothing. For fitting a specific wavelength regime and/or resolution, you can use […]
The ideal solution would be if modelers provided grids with a constant log-wavelength sampling instead of the typical constant wavenumber spacing...
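To illustrate the point about sampling (a toy sketch, not from the thread; the resolving powers, wavelength bounds, and line profile are all made-up assumptions): on a grid with constant spacing in log(lambda), lambda/dlambda is the same at every pixel, so a single fixed-width Gaussian kernel smooths the entire spectrum to one resolving power.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Toy input: a spectrum on a constant-wavenumber grid (cm^-1), as is
# typical for model files.
wavenum = np.arange(2000.0, 10000.0, 1.0)          # constant Delta nu
wavel = 1e4 / wavenum[::-1]                        # micron, ascending
flux = np.exp(-0.5 * ((wavel - 2.0) / 0.05) ** 2)  # toy spectral line

# Resample to constant spacing in log(lambda), i.e. constant lambda/dlambda.
r_out = 10_000                                     # assumed grid resolving power
log_wavel = np.arange(np.log(wavel[0]), np.log(wavel[-1]), 1.0 / r_out)
flux_log = np.interp(log_wavel, np.log(wavel), flux)

# On this grid, one Gaussian kernel smooths to a constant resolving power.
r_target = 1000                                    # assumed target resolution
sigma = r_out / (r_target * 2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma, pixels
flux_smooth = gaussian_filter1d(flux_log, sigma)
```

On a constant-wavenumber grid, by contrast, the kernel width in pixels would vary with wavelength, which is what makes the smoothing slow.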
I agree that […]
Yes, that could well be! It is important to check if the resolution is sufficient for the data used (see […]). For example, certain wavelength regimes might be inaccurate for comparing with high-resolution data while still being fine at low resolution or for photometry.
The Sonora Elf Owl models have a fixed lambda/d_lambda, so I did not need to resample them. For now, I have only included the T-type grid, but I could add the Y and L grids later on if needed. The log(g)=3.0 points were not regularly sampled in the grid, so I simply removed all spectra with log(g)=3.0. I created some plots in the Teff = 800-1000 K regime, for which the grid was complete. However, please check this when using […]. I will upload the grid later today, which will have the tag […]
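As a side note (not part of the thread), the "fixed lambda/d_lambda" property is easy to verify for any wavelength array. A minimal check, with the resolving power and wavelength bounds as made-up assumptions:

```python
import numpy as np

# Toy wavelength grid with constant spacing in log(lambda); with real model
# files one would load the wavelength array from the file instead.
r_grid = 10_000                                    # assumed resolving power
wavel = np.exp(np.arange(np.log(0.6), np.log(15.0), 1.0 / r_grid))

# lambda / Delta(lambda) per pixel; constant iff the grid is log-uniform.
resolving_power = wavel[:-1] / np.diff(wavel)

# For a log-uniform grid, R = 1 / (exp(1/r_grid) - 1), i.e. about r_grid - 0.5,
# the same at every pixel.
assert np.allclose(resolving_power, resolving_power[0])
```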
Here is an example of the spectra. Some of the changes between grid points are highly non-linear, so fitting/interpolating may not always give accurate results. Instead, the […]
Wow! Very nice. Thank you! The non-linear dependence is probably even stronger at high resolution. Maybe it would be worth trying to contact Mukherjee et al. about calculating spectra at intermediate parameter values?
I was able to download and add the model grid, but initially it would stall and the kernel would die before it finished adding all the grid points to the database. I had to specify a wavelength sampling, e.g. […]
So there might still be an issue with the native sampling, or with corrupt or missing grid points in the current tgz file. I made sure to monitor the memory of my machine while adding the model to the database, and I don't think it was an issue with my machine's memory. After adding the model, the fit worked great! (Well, great in that I was able to finish a sampling run and the resulting steps ran normally; none of these models fit my data with reasonable parameters 😭.)
Hello @wbalmer, Sorry that no model fits with reasonable parameters… 😢. Could it be linked to the too-coarse spacing in parameters such as logg or [Fe/H] (at least around 900 K, looking at the plots from Tomas above)? If you go away on either side of the best fit by one model point, does it look like the best fit may have been missed? Do you get the same best fit if you downgrade the resolution of your data? About the initial stalling: if it were due to corrupt or missing files, it might say […]. How did you monitor the memory usage, by the way? (I am not sure how one should do it!) Maybe this helps at least one of us… Thanks! Gabriel
Thanks for testing! Adding the full grid will be too memory-intensive for a typical machine, I think. The TAR file by itself is 36 GB, and it will add an array with all spectra at once to the HDF5 database. On my MacBook, with 16 GB of memory, I could add a Teff range of 200 K without resampling. The Sonora Elf Owl grids are cloud-free, so a poor fit probably means that your object is cloudy 😊.
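The memory argument can be made concrete with back-of-the-envelope numbers (the grid shape and sampling below are illustrative assumptions, not the actual Elf Owl dimensions):

```python
# Rough size of a single array holding every spectrum of a grid at once.
# All axis lengths below are made-up assumptions, not the real grid.
n_teff, n_logg, n_feh, n_co, n_kzz = 30, 7, 5, 4, 6
n_wavel = 300_000                      # assumed native wavelength sampling
bytes_per_value = 8                    # float64

n_models = n_teff * n_logg * n_feh * n_co * n_kzz
gb_full = n_models * n_wavel * bytes_per_value / 1e9
print(f"full grid in one array: {gb_full:.0f} GB")   # tens of GB

# Restricting Teff to a 200 K window (e.g. 3 of 30 grid points) shrinks
# the array proportionally, which is why it can fit on a 16 GB laptop:
gb_subset = gb_full * 3 / n_teff
print(f"Teff-restricted subset: {gb_subset:.1f} GB")
```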
Nice! Is the irregular logg spacing (Delta logg = 0.2 / 0.2 / 0.3, cycling) a problem?
No, the grid is just not complete for log(g) = 3.
Ah! OK, thanks. And likewise, the missing C/O = 2.0 point is not a problem (variable Delta C/O), I guess?
If I may politely re-open this briefly: would it make sense to also provide a low-resolution version of the model grids for when one wants to compute only photometry? Otherwise it is a slow trial-and-error of restricting the range of Teff values or the other parameters, with freezing and crashing when it does not work, and variations from machine to machine. Maybe R = 3000 or 10,000 would be enough for most/all photometry purposes (not sure).
Easiest would be to use the […]
Ah! Right, thanks for pointing that out. I was actually setting both parameters, but did not realise that this would prevent all the models being read into memory at once (which makes sense), i.e., that it should be the solution. Then, my problem was that […]
It will store all spectra in a single array when adding to the database. That could be changed, but it is not something I can easily do. A sampling of 10,000 is quite high. For fitting only (broadband) photometry, I think that ~100 should be sufficient.
It will store all the downsampled spectra, though, right? Then that would not be an issue for the RAM. From the Sonora Bobcat README: "In our experience, a minimum of 10 wavelength points is necessary to obtain a reasonable average flux over a wavelength interval", so if filters have R ~ 10–100 (broad- to narrow-band photometry), I guess indeed 100–1000 should be enough.
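A quick sanity check of the "at least 10 points per filter" rule of thumb (a toy spectrum and a hypothetical top-hat filter; the resolving power and band edges are assumptions, not real filter properties):

```python
import numpy as np

# Toy model spectrum on a log-uniform grid with an assumed resolving power.
r_grid = 1000
wavel = np.exp(np.arange(np.log(0.8), np.log(2.5), 1.0 / r_grid))  # micron
flux = 1.0 + 0.1 * np.sin(20.0 * wavel)        # arbitrary smooth spectrum

# Hypothetical broad-band top-hat filter (R_filter ~ 10).
f_min, f_max = 1.2, 1.32
in_band = (wavel >= f_min) & (wavel <= f_max)

# Sonora Bobcat README rule of thumb: at least 10 points per filter.
n_points = int(in_band.sum())
assert n_points >= 10, "sampling too coarse for this filter"

# Band-averaged flux over the top-hat (the grid is nearly uniform in-band).
mean_flux = flux[in_band].mean()
```

With R_grid = 1000 this filter contains on the order of a hundred points, comfortably above the minimum, which matches the estimate that a sampling of 100–1000 suffices for photometry.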
Yes, exactly, the downsampled spectra, so at a low sampling resolution that should probably be fine for most/all grids.
It is also related to issue #104, but how can one fit e.g. the […]
And about […] (in […])
Does this mean that we cannot fit objects with a Teff between 1200 and 1300 K 😁? Duplicating big data files is not elegant, but it would make it possible to cover all Teff values (by doing fits in two parts). Edit: Actually, to avoid edge effects, I guess one should have an overlap of one full grid bin, i.e., have up to e.g. 1300 K in the T-dwarf grid and down to 1200 K in the L-dwarf grid (assuming Delta Teff = 100 K).
The Sonora team recently released their Elf Owl model grid on Zenodo (T-dwarfs here, but there are also separate L-dwarf and Y-dwarf entries, maybe because of file size?). These are supposedly the successors to the Bobcat and Cholla grids, and vary all the interesting parameters (abundances and Kzz). It would be very useful to have them in species!