
Welcome to the Pangloss Wiki!

The goal of the Pangloss project is to model and understand massive structures in the Universe, given photometric and gravitational lensing information about galaxies in wide-field surveys. The original motivation was to be able to account for objects along the line of sight to various interesting targets, whose apparent positions, brightnesses and shapes are all affected by the combined gravitational lensing effect of all the mass between them and us. In particular, we want to make accurate measurements of distances (cosmography), high-redshift luminosity functions, and cluster masses, all of which count line-of-sight structure among their most serious systematic errors. More generally, we are interested in making maps for their own sake: the connection between galaxies and their dark matter halos is of fundamental importance for understanding galaxy formation and evolution.

Pangloss is scientific code, still under development, for holistically modeling mass in the Universe on galaxy and cluster scales. This wiki tries to show what we are doing with it; if you want to play around with the code yourself, please contact Phil Marshall to let us know you are here, and then see the README for more advice.

Code description

The code is documented in place, but hopefully this page will help you navigate it.

Projects

  • "Reconstructing Every Galaxy - Is It Worth It, for Lens Cosmography?" Collett, Marshall et al. This project grew in to our first paper, "Reconstructing the lensing mass in the universe from photometric catalogue data", Collett et al (2013).

  • "Inferring the Mass Map of the Observable Universe from 10 Billion Galaxies" Marshall, Wechsler, Becker, Skillman, Collett et al, in progress. This is a big one: calibrating to weak lensing data, rather than simulations. One issue is the huge dimensionality integral involved when inferring the hyperparameters of the halo-based model; another is the need to include group and cluster mass distributions. This was the subject of a 2014 Stanford Data Science Initiative proposal, PI'd by Wechsler - dive in to read what we wrote.

  • "Calculating Weak Magnification PDFs for Survey Fields from Simulation Data" Mason, Collett et al. Calculating magnification pdfs from Millennium Simulation data - for sources at z8! Then matching to survey data fields (in this case BoRG) via overdensity. Used in _"Correcting the z8 Galaxy Luminosity Function for Gravitational Lensing Magnification Bias"_, Mason et al (2015).

Plans

  • "Application to the COSMOGRAIL Lens Sample", Collett, Marshall, Suyu, et al (H0LiCoW). We'll need some way of estimating the systematics introduced by the calibration step...

  • "Including Groups and Clusters - Can we avoid the calibration step?" Marshall and anyone else. Should be a straightforward extension to include a suitable group catalog. It is to be seen how good such catalogs are though. Can we include voids? Work with the simulations in 3D? This project might get absorbed into the giant hierarchical inference one listed above.

Upgrade to-do list

  • Improve the ray-tracing scheme beyond the current simple summation of lens planes (sketched after this list). Keeton et al weights? Blandford propagation matrices? These will need testing, with suitable simulations...
  • Keep track of shear as well as convergence: this is something the lens model can help constrain, after all. Shear calculation is already included - it's just not yet used in the calibration step (since everything goes through kappah_median).
  • Include groups and clusters: optical cluster finders exist, and the latest ones even attempt a probabilistic assessment of cluster membership. The outputs from these codes will fit very nicely into our probabilistic framework.
  • Move to 3D modeling of the smooth component: We tried to make this work, but couldn't quite. It deserves a bit more study...
  • Include voids, filaments and sub-galactic halos. This is probably three upgrades - but since we don't know which is most important yet, it seems sensible to keep these together...
  • Fit to weak lensing data: thus freeing ourselves from the simulations (except via the hyper-priors, a safer place to make assumptions).
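
For orientation, the "simple summation" in the first item above just adds up per-plane convergences and shears, each scaled to the source redshift. The sketch below (made-up function and variable names, not the Pangloss API) shows that scheme, together with the standard weak-lensing magnification it implies, which is also where the shear tracking in the second item would feed in:

```python
import numpy as np

def total_lensing(kappa_planes, gamma_planes):
    """Simple summation over lens planes.

    kappa_planes, gamma_planes: per-plane convergence and (complex)
    shear, each already scaled to the source redshift. This ignores
    lens-lens coupling, which Keeton-style weights or full propagation
    matrices would aim to correct.
    """
    kappa = np.sum(kappa_planes)
    gamma = np.sum(gamma_planes)
    # Standard weak-lensing magnification for the summed fields:
    mu = 1.0 / ((1.0 - kappa) ** 2 - np.abs(gamma) ** 2)
    return kappa, gamma, mu

# Example with three toy planes:
kappa, gamma, mu = total_lensing(
    np.array([0.01, 0.02, 0.005]),
    np.array([0.01 + 0.00j, -0.005 + 0.01j, 0.002 - 0.003j]),
)
```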