Commit
Merge branch 'nuclear-multimessenger-astronomy:main' into main
AnnaPuecher authored May 13, 2024
2 parents 4c20932 + 535461f commit ea32dd7
Showing 53 changed files with 11,523 additions and 122 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/build-deploy-container.yml
@@ -42,7 +42,7 @@ jobs:

- name: Build and push
id: build
uses: docker/build-push-action@v4
uses: docker/build-push-action@v5
with:
context: .
file: ./api/Dockerfile
22 changes: 22 additions & 0 deletions .github/workflows/rst-linter.yml
@@ -0,0 +1,22 @@
name: Lint rst

on:
push:
paths: ['**/**.rst']
pull_request:
paths: ['**/**.rst']

jobs:
doctor-rst:
name: DOCtor-RST
runs-on: ubuntu-latest
steps:
- name: "Checkout code"
uses: actions/checkout@v4

- name: DOCtor-RST
uses: docker://oskarstark/doctor-rst
with:
args: --short --error-format=github
env:
DOCS_DIR: 'doc/'
9 changes: 8 additions & 1 deletion README.md
@@ -14,17 +14,24 @@
</div>


[![GitHub Repo stars](https://img.shields.io/github/stars/nuclear-multimessenger-astronomy/nmma?style=flat)](https://github.com/nuclear-multimessenger-astronomy/nmma/stargazers)
[![GitHub forks](https://img.shields.io/github/forks/nuclear-multimessenger-astronomy/nmma?style=flat&color=%2365c563)](https://github.com/nuclear-multimessenger-astronomy/nmma/forks)
[![Conda Downloads](https://img.shields.io/conda/dn/conda-forge/nmma?label=conda%20downloads)](https://anaconda.org/conda-forge/nmma)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/nmma?label=PyPI%20downloads)](https://badge.fury.io/py/nmma)
[![Coverage Status](https://coveralls.io/repos/github/nuclear-multimessenger-astronomy/nmma/badge.svg?branch=main)](https://coveralls.io/github/nuclear-multimessenger-astronomy/nmma?branch=main)
[![CI](https://github.com/nuclear-multimessenger-astronomy/nmma/actions/workflows/continous_integration.yml/badge.svg)](https://github.com/nuclear-multimessenger-astronomy/nmma/actions/workflows/continous_integration.yml)
[![PyPI version](https://badge.fury.io/py/nmma.svg)](https://badge.fury.io/py/nmma)
[![Python version](https://img.shields.io/pypi/pyversions/nmma.svg)](https://badge.fury.io/py/nmma)




Citations to the NMMA code: [Citation record](https://inspirehep.net/literature?sort=mostrecent&size=250&page=1&q=refersto%3Arecid%3A2083145&ui-citation-summary=true)

Read our official documentation: [NMMA Documentation](https://nuclear-multimessenger-astronomy.github.io/nmma/)

Check out our contribution guide: [For contributors](https://nuclear-multimessenger-astronomy.github.io/nmma/contributing.html)


A tutorial on how to produce lightcurve simulations is given here: [tutorial-lightcurve_simulation.ipynb](https://github.com/nuclear-multimessenger-astronomy/nmma/blob/main/tutorials/tutorial-lightcurve_simulation.ipynb)


1 change: 1 addition & 0 deletions api/environment.yml
@@ -7,6 +7,7 @@ dependencies:
- python=3.9.*
- numpy
- mpi4py
- numcodecs

- pip:
- -r requirements.txt
5 changes: 5 additions & 0 deletions doc/.doctor-rst.yaml
@@ -0,0 +1,5 @@
rules:
blank_line_after_directive: ~
exclude_rule_for_file:
- path: changelog.rst
rule_name: blank_line_after_directive
21 changes: 21 additions & 0 deletions doc/fitting.md
@@ -4,6 +4,27 @@ Given a light curve from an optical survey telescope (and potential follow-up),

In many cases, the lightcurve predicted by each set of parameters is **extremely high-dimensional**, given the number of measurements made. Our goal for this example is to determine the best-fit model parameters for an object based on its observed lightcurve.

### Filters
Often a lightcurve may contain data from SDSS, for example `sdssg`, `sdssr`, `sdssi`, etc. Since the current SVD models are only trained on the `sdssu` filter, it is advised that the following filter names be changed to their Pan-STARRS1 equivalents in the lightcurve data file.

:::{table}
:width: 50%
:align: center

| SDSS | Pan-STARRS1 |
|:-----:|:-----------:|
| sdssg | ps1__g |
| sdssr | ps1__r |
| sdssi | ps1__i |
| sdssz | ps1__z |
:::

Correspondingly, the filter flag should be `--filters ps1__g,ps1__r,ps1__i,ps1__z`.

:::{note}
The same is applicable for LSST filters.
:::
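
For lightcurve files in the whitespace-separated format NMMA reads (columns of time, filter, magnitude, and magnitude error, as parsed by `loadEvent` in this commit), the renaming can be scripted. A minimal sketch with hypothetical file names:

```python
# Map SDSS filter names to their Pan-STARRS1 equivalents in a
# whitespace-separated lightcurve file (time filter mag dmag per line).
SDSS_TO_PS1 = {"sdssg": "ps1__g", "sdssr": "ps1__r",
               "sdssi": "ps1__i", "sdssz": "ps1__z"}

with open("lightcurve.dat") as infile:
    rows = [line.split() for line in infile if line.strip()]

with open("lightcurve_ps1.dat", "w") as outfile:
    for time, filt, mag, dmag in rows:
        filt = SDSS_TO_PS1.get(filt, filt)  # non-SDSS filters pass through
        outfile.write(f"{time} {filt} {mag} {dmag}\n")
```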

### Example fit to simulated data

Following the quick start, we assume that an injection file has been generated and made available. For example, there are a number of extra parameters available to modify the light curve sampling, including:
34 changes: 34 additions & 0 deletions doc/index.rst
@@ -208,6 +208,7 @@ requirements.txt file which are necessary for NMMA:
If everything has gone smoothly, all of the above-mentioned "pip install something" commands will show that the requirements have already been satisfied. Otherwise, they will cover any dependencies not handled by ``pip install .``. Also, if running ``pip install .`` fails with something along the lines of "cannot cythonize without cython", do:

.. code::
conda install -c anaconda cython==0.29.24
pip install .
@@ -409,6 +410,39 @@ and follow the instructions above.
module load gcc/9.2.0
module load openmpi/4.1.1
Internetless Clusters
^^^^^^^^^^^^^^^^^^^^^

Some cluster resources may not have access to the internet or may block some forms of data transfer. This causes difficulties with certain packages and requires changes to how NMMA interacts with them, as well as to how you call some NMMA commands. It can also affect how you install NMMA on the cluster. Because the cluster is restricted in some way, the `conda install nmma -c conda-forge` or `pip install nmma` commands may not work, so you will need to install NMMA from source. To begin, copy the NMMA repository from your local machine to the cluster using an `scp` or `rsync` command, since the git commands will likely fail. Once NMMA is downloaded to the cluster, navigate to the main directory and run

.. code::
pip install -r requirements.txt
pip install .
and check that the installation was successful by running the first check for NMMA given above. After installation, the next most likely difference is how NMMA uses the trained kilonova models during any form of EM analysis. When running any analysis that requires the trained grid of models, you will need to add the `--local-only` flag to the analysis command so that NMMA does not attempt to fetch the models over the internet. For example,

.. code::
lightcurve-analysis \
--model LANLTP2 \
--svd-path svdmodels/ \
--filters ztfg,ztfi,ztfr \
--ztf-sampling \
--ztf-uncertainties \
--ztf-ToO 180 \
--local-only \
--interpolation-type tensorflow \
--outdir outdir/TP2_ztf \
--label TP2_ztf \
--prior priors/LANL2022.prior \
--tmin 0. \
--tmax 14 \
--dt 0.1 \
--error-budget 1 \
--nlive 1024
Matplotlib fonts
^^^^^^^^^^^^^^^^

29 changes: 23 additions & 6 deletions doc/joint_inference.md
@@ -4,11 +4,11 @@ A joint inference on gravitational-wave and electromagnetic signals requires NMM

In order to run a multi-messenger inference, we need to follow two main steps. First, run the generation step using:

nmma_generation config.ini
nmma-generation config.ini

Perform the analysis or parameter estimation using:

nmma_analysis --data-dump <name_of_analysis>_data_dump.pickle
nmma-analysis --data-dump <name_of_analysis>_data_dump.pickle

First of all, we set up the `config.ini` file and provide all required data and information.

@@ -120,13 +120,13 @@ In order to prepare the joint inference, a `config.ini` file is required which s

The joint inference generation can be performed by running:

nmma_gw_generation config.ini
nmma-generation config.ini

This will generate a `GW170817-AT2017gfo-GRB170817A_data_dump.pickle` file under `outdir/data/` which need to be provided for the joint inference function `nmma_analysis`.
This will generate a `GW170817-AT2017gfo-GRB170817A_data_dump.pickle` file under `outdir/data/` which needs to be provided to the joint inference function `nmma-analysis`.

**Running the analysis**

As detailed above, running the analysis with the command `nmma_analysis --data-dump outidr/data/GW170817-AT2017gfo-GRB170817A_data_dump.pickle` requires computational resources on a larger cluster. Below we show an example script for job submission called `jointinf.pbs` on a German cluster:
As detailed above, running the analysis with the command `nmma-analysis --data-dump outdir/data/GW170817-AT2017gfo-GRB170817A_data_dump.pickle` requires computational resources on a larger cluster. Below we show an example script for job submission called `jointinf.pbs` on a German cluster:

#!/bin/bash
#PBS -N <name of simulation>
@@ -146,6 +146,23 @@ As detailed above, running the analysis with the command `nmma_analysis --data-d
export MPI_LAUNCH_TIMEOUT=240

cd $PBS_O_WORKDIR
mpirun -np 512 omplace -c 0-127:st=4 nmma_analysis --data-dump <absolute path to folder>/outdir/data/GW170817-AT2017gfo-GRB170817A_data_dump.pickle --nlive 1024 --nact 10 --maxmcmc 10000 --sampling-seed 20210213 --no-plot --outdir <absolute path to outdir/result folder>
mpirun -np 512 omplace -c 0-127:st=4 nmma-analysis --data-dump <absolute path to folder>/outdir/data/GW170817-AT2017gfo-GRB170817A_data_dump.pickle --nlive 1024 --nact 10 --maxmcmc 10000 --sampling-seed 20210213 --no-plot --outdir <absolute path to outdir/result folder>

Note that settings may differ from cluster to cluster, and the NMMA installation may also differ (conda vs. pip installation).


**Maximum mass constraint from a joint analysis**

From a joint posterior of GW and lightcurve data from a BNS, one can derive an upper limit on the TOV mass, if one assumes that the remnant collapsed to a black hole. The idea is to determine the posterior distribution of the remnant's mass from the posterior distributions of the individual neutron star masses $m_1$, $m_2$ and the ejecta, and to compare this to the TOV mass of the EOSs.

This can be done via the command

maximum-mass-constraint --outdir <path to folder> --joint-posterior <path to the file with samples from GW+EM analysis> --prior <path to a bilby prior file> --eos-path-macro <path to macroscopic EOS> --eos-path-micro <path to microscopic EOS> [--use-M-Kepler]

The last flag determines whether the remnant mass is compared against the TOV mass or against the maximum mass limit for a rotating NS (Kepler limit); the latter is less conservative. The joint posterior should contain the parameters chirp mass, `eta_star`, `log10_mdisk`, and `log10_mej_dyn` as named columns. Here, `eta_star` is $\eta^* = \ln(0.25 - \eta)$ with $\eta$ the symmetric mass ratio. The macroscopic EOS curves must have the central pressure $p_0$ in MeV/fm³ of each NS mass as the last column.
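
As a minimal sketch of these parameter definitions (the numbers and the simple remnant-mass combination below are illustrative assumptions, not the exact NMMA prescription):

```python
import numpy as np

# Illustrative component masses in solar masses.
m1, m2 = 1.48, 1.26

eta = m1 * m2 / (m1 + m2) ** 2            # symmetric mass ratio
eta_star = np.log(0.25 - eta)             # eta* = ln(0.25 - eta)
chirp_mass = (m1 * m2) ** (3 / 5) / (m1 + m2) ** (1 / 5)

# A rough remnant-mass estimate from the posterior columns: total mass minus
# dynamical ejecta and disk mass (an assumption for illustration; it neglects,
# e.g., energy radiated in GWs and disk winds).
log10_mej_dyn, log10_mdisk = -2.0, -1.7   # example posterior samples
m_remnant = m1 + m2 - 10 ** log10_mej_dyn - 10 ** log10_mdisk
print(f"Mc = {chirp_mass:.3f}, eta* = {eta_star:.3f}, M_rem = {m_remnant:.3f}")
```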

If `--use-M-Kepler` is set, the prior file needs to contain two additional fiducial parameters for the quasi-universal relations:

ratio_R = Gaussian(name = "R", mu = 1.255, sigma = 0.024)
delta = Uniform(name="delta", minimum = -0.0125, maximum = 0.0125)
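
A quick way to sanity-check these entries is to load the prior file with bilby; a sketch assuming a standard bilby prior file (file name hypothetical):

```python
from bilby.core.prior import PriorDict

# Parse the prior file and draw a sample of the two fiducial parameters.
priors = PriorDict(filename="maximum_mass.prior")
sample = priors.sample(1)
print(sample["ratio_R"], sample["delta"])
```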

2 changes: 1 addition & 1 deletion doc/models.md
@@ -55,7 +55,7 @@ The original light curves are available on GitHub [here](https://github.com/mbul

#### Wollaeger et al. 2021

We use a model from [Wollaeger et al. 2021](https://arxiv.org/abs/2105.11543) spanning the plausible binary neutron star parameter space. The model is named LANL2022 in the code, and there are a number of geometries to choose from (see [here](https://github.com/nuclear-multimessenger-astronomy/nmma/blob/main/nmma/em/model.py#L59) for LANL list). The model parameters are:
We use a model from [Wollaeger et al. 2021](https://arxiv.org/abs/2105.11543) spanning the plausible binary neutron star parameter space. The models are named LANLTS1, LANLTS2, LANLTP1, and LANLTP2 in the code, representing the different geometries to choose from (see [here](https://github.com/nuclear-multimessenger-astronomy/nmma/blob/main/nmma/em/model.py#L72) for LANL list). The model parameters are:

* dynamical ejecta mass $M^{\rm dyn}_{\rm ej}$
* dynamical ejecta velocity $v^{\rm dyn}_{\rm ej}$
6 changes: 6 additions & 0 deletions doc/training.md
@@ -84,6 +84,12 @@ For the HDF5 file format, the `resample-grid` script enables downsampling and fr

resample-grid --gridpath nmma/tests/data/lowmass_collapsar_updated.h5 --factor 5 --do-downsample

### Evaluating training results

While the neural network training output includes diagnostic plots such as loss curves, we recommend using `svdmodel-benchmark` to perform a more thorough evaluation. This code uses the trained NN to re-generate each light curve in the grid used for training and computes a reduced chi-squared value between the two. The distribution of these reduced chi-squared values is plotted filter-by-filter and saved in a JSON file.

The `plot-svdmodel-benchmarks` script reads the JSON file associated with each model in a directory and creates a single bar plot per model showing the 25th, 50th, and 75th percentiles of the reduced chi-squared distribution for each filter. The plots also list the maximum chi-squared value across all filters. This output provides a useful summary of training for each model, and these plots are included in the GitLab repo that contains the latest trained models.
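
As a minimal sketch of the benchmark statistic (the function and example values below are illustrative, not the exact `svdmodel-benchmark` internals):

```python
import numpy as np

def reduced_chi_squared(mag_true, mag_pred, sigma, n_params=0):
    # Reduced chi-squared between a grid lightcurve and its NN reconstruction.
    residuals = (np.asarray(mag_true) - np.asarray(mag_pred)) / np.asarray(sigma)
    dof = residuals.size - n_params
    return np.sum(residuals**2) / dof

# Summarize a per-filter distribution at the quartiles, as in the bar plots.
chi2_per_filter = {"ztfg": [0.8, 1.1, 2.3], "ztfr": [0.9, 1.4, 3.0]}
for filt, values in chi2_per_filter.items():
    q25, q50, q75 = np.percentile(values, [25, 50, 75])
    print(f"{filt}: 25th={q25:.2f} 50th={q50:.2f} 75th={q75:.2f}")
```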

### Spectral grids

Often, the simulations come on spectral grids rather than as light curves, and it could be useful to create surrogate models for these spectral grids instead. In this case, the software expects files of the form:
26 changes: 17 additions & 9 deletions nmma/em/analysis.py
@@ -386,6 +386,21 @@ def get_parser(**kwargs):
help="Number of cos-iota nodes used in the Bayestar fits (default: 10)",
default=10,
)
parser.add_argument(
"--ra",
type=float,
help="Right ascension of the sky location; to be used together with fits file"
)
parser.add_argument(
"--dec",
type=float,
help="Declination of the sky location; to be used together with fits file"
)
parser.add_argument(
"--dL",
type=float,
help="Distance of the location; to be used together with fits file"
)

parser.add_argument(
"--skip-sampling",
@@ -565,15 +580,8 @@ def analysis(args):
lc.to_csv(args.injection_outfile)

else:
# load the kilonova afterglow data
try:
data = loadEvent(args.data)

except ValueError:
with open(args.data) as f:
data = json.load(f)
for key in data.keys():
data[key] = np.array(data[key])
# load the lightcurve data
data = loadEvent(args.data)

if args.trigger_time is None:
# load the minimum time as trigger time
13 changes: 7 additions & 6 deletions nmma/em/gwem_resampling_utils.py
@@ -177,17 +177,18 @@ def __init__(self, GWsamples, EMsamples, GWprior, EMprior, Neos, EOSpath, withNS
z = luminosity_distance_to_redshift(self.GWsamples.luminosity_distance.to_numpy())
mc = self.GWsamples.chirp_mass.to_numpy() / (1 + z)
q = self.GWsamples.mass_ratio.to_numpy()
eta = q/(1+q)**2
EOS = self.GWsamples.EOS.to_numpy()

if (withNSBH):
chi_1 = self.GWsamples.chi_1.to_numpy()
chi_2 = self.GWsamples.chi_2.to_numpy()
chi_eff = (chi_1 + q*chi_2)/(1+q)
self.chi_1KDE = scipy.stats.gaussian_kde(chi_1)
self.chi_2KDE = scipy.stats.gaussian_kde(chi_2)
self.priorKDE = scipy.stats.gaussian_kde((chi_1, chi_2, mc, eta))

else:
self.priorKDE = scipy.stats.gaussian_kde((mc, eta))

self.mcKDE = scipy.stats.gaussian_kde(mc)
self.invqKDE = scipy.stats.gaussian_kde(1. / q)
self.EOSsamples = EOS.astype(int) + 1
self.EMKDE = construct_EM_KDE(self.EMsamples)

@@ -218,7 +219,7 @@ def LogLikelihood(self, x):
return np.nan_to_num(-np.inf)

if self.withNSBH:
logprior = self.chi_1KDE.logpdf(chi_1) + self.chi_2KDE.logpdf(chi_2) + self.mcKDE.logpdf(mc) + self.invqKDE.logpdf(m1 / m2) + np.log(len(np.where(self.EOSsamples == EOS)[0]))
logprior = self.priorKDE.logpdf((chi_1, chi_2, mc, eta)) + np.log(len(np.where(self.EOSsamples == EOS)[0]))
try:
r2 = self.EOS_radius_interp_dict[EOS](m2)
except ValueError:
@@ -232,7 +233,7 @@ def LogLikelihood(self, x):
loglikelihood = self.EMKDE.logpdf(total_ejecta_mass)

else:
logprior = self.mcKDE.logpdf(mc) + self.invqKDE.logpdf(m1 / m2) + np.log(len(np.where(self.EOSsamples == EOS)[0]))
logprior = self.priorKDE.logpdf((mc, eta)) + np.log(len(np.where(self.EOSsamples == EOS)[0]))

try:
r1 = self.EOS_radius_interp_dict[EOS](m1)
60 changes: 38 additions & 22 deletions nmma/em/io.py
@@ -1,3 +1,4 @@
import json
import astropy
from astropy.time import Time
import h5py
@@ -9,28 +10,43 @@


def loadEvent(filename):
lines = [line.rstrip("\n") for line in open(filename)]
lines = filter(None, lines)

sncosmo_filts = [val["name"] for val in _BANDPASSES.get_loaders_metadata()]
sncosmo_maps = {name: name.replace(":", "_") for name in sncosmo_filts}

data = {}
for line in lines:
lineSplit = line.split(" ")
lineSplit = list(filter(None, lineSplit))
mjd = Time(lineSplit[0], format="isot").mjd
filt = lineSplit[1]

if filt in sncosmo_maps:
filt = sncosmo_maps[filt]

mag = float(lineSplit[2])
dmag = float(lineSplit[3])

if filt not in data:
data[filt] = np.empty((0, 3), float)
data[filt] = np.append(data[filt], np.array([[mjd, mag, dmag]]), axis=0)
"""
Reads in lightcurve data from a file and returns data in a dictionary format.
Args:
- filename (str): Path to lightcurve file
Returns:
- data (dict): Dictionary containing the lightcurve data from the file. The keys are generally 't' and each of the filters in the file as well as their accompanying error values.
"""
if filename.endswith(".json"):
with open(filename) as f:
data = json.load(f)
for key in data.keys():
data[key] = np.array(data[key])
else:
lines = [line.rstrip("\n") for line in open(filename)]
lines = filter(None, lines)

sncosmo_filts = [val["name"] for val in _BANDPASSES.get_loaders_metadata()]
sncosmo_maps = {name: name.replace(":", "_") for name in sncosmo_filts}

data = {}
for line in lines:
lineSplit = line.split(" ")
lineSplit = list(filter(None, lineSplit))
mjd = Time(lineSplit[0], format="isot").mjd
filt = lineSplit[1]

if filt in sncosmo_maps:
filt = sncosmo_maps[filt]

mag = float(lineSplit[2])
dmag = float(lineSplit[3])

if filt not in data:
data[filt] = np.empty((0, 3), float)
data[filt] = np.append(data[filt], np.array([[mjd, mag, dmag]]), axis=0)

return data
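
A short usage sketch of the two input formats `loadEvent` now supports (file names hypothetical):

```python
from nmma.em.io import loadEvent

# Whitespace-separated format: one observation per line, e.g.
#   2017-08-17T12:41:04.4 ps1__g 17.50 0.10
data = loadEvent("lightcurve.dat")
for filt, obs in data.items():
    print(filt, obs.shape)  # each row is [mjd, mag, dmag]

# JSON format: keys are typically 't' and the filter names.
data_json = loadEvent("lightcurve.json")
```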
