fixing example file + allowing lmax_theory to set ell range #92

Open
@sgiardie wants to merge 3 commits into master
Conversation

@sgiardie (Collaborator) commented Sep 4, 2024

I fixed the example file, since a likelihood block with `mflike.MFLike` instead of `mflike.TTTEEE` does not provide the default settings needed by mflike.
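For illustration only, a minimal sketch of such a likelihood block, written as a Cobaya info dict rather than the YAML of the actual example file; passing `None` is Cobaya's way of picking up the defaults packaged with the component, and the other blocks are omitted:

```python
# Sketch (not the actual example file): point the likelihood block at
# mflike.TTTEEE, which ships the needed default settings.
info = {
    "likelihood": {
        "mflike.TTTEEE": None,  # None = use the class's packaged defaults
    },
    # theory and params blocks omitted for brevity
}
```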

I also allowed `lmax_theory` to set `self.ells[-1]`. Before, it only set the ell max of the theory Cl, but not of `self.ells` passed to the `Foreground` class. Now `lmax_theory` sets the ell max for both, as requested by @itrharrison for some SOLikeT tests.

It's also not super useful to have
`"Cl": {k: max(c, self.lmax_theory + 1) for k, c in self.lcuts.items()}}`
(which I changed to `"Cl": {k: self.lmax_theory + 1 for k, _ in self.lcuts.items()}}`).
If `lmax_theory > self.lcuts`, the old behaviour is the same as the new one; if instead `lmax_theory < self.lcuts`, we made a mistake by requesting an `lmax_theory` smaller than some of the ells in `default_cuts["scales"]`.
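A toy comparison of the two comprehensions (the values are made up, not from the PR; in the default setup `lmax_theory` sits well above the scale cuts, so the two forms agree):

```python
# Illustrative only: the old max() form and the new flat form coincide
# whenever lmax_theory exceeds every scale cut.
lmax_theory = 9000
lcuts = {"tt": 5000, "te": 5000, "ee": 5000}  # hypothetical scale cuts

old = {k: max(c, lmax_theory + 1) for k, c in lcuts.items()}
new = {k: lmax_theory + 1 for k in lcuts}

assert old == new  # both request Cl up to ell = 9001
```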

@codecov-commenter commented Sep 4, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 85.24%. Comparing base (5afa861) to head (d1dd5a5).

Additional details and impacted files

@@           Coverage Diff           @@
##           master      #92   +/-   ##
=======================================
  Coverage   85.24%   85.24%           
=======================================
  Files           3        3           
  Lines         454      454           
=======================================
  Hits          387      387           
  Misses         67       67           
| Files with missing lines | Coverage Δ |
|---|---|
| mflike/foreground.py | 89.24% <ø> (ø) |
| mflike/mflike.py | 82.75% <100.00%> (ø) |

@sgiardie (Collaborator, Author) commented Sep 5, 2024

I also expanded and fixed a few things in the docs. This does not really relate to the other two fixes, but they are all minor changes that can be merged together.

@xgarrido (Collaborator) commented Sep 5, 2024

> If `lmax_theory > self.lcuts`, the old behaviour is the same as the new one; if instead `lmax_theory < self.lcuts`, we made a mistake by requesting an `lmax_theory` smaller than some of the ells in `default_cuts["scales"]`.

I agree that `lmax_theory < self.lcuts` is indeed a mistake, but your fix (I'm not sure, was anything actually broken by this?) will not warn the user about the problem, and the theoretical Cls will be sharply cut.

I'm not saying that the previous code is better. As far as I remember, we did it that way because of a mismatch between the lmax value (around 9000) used when simulating data (data that can be cut at ell = 5000 afterwards) and the lmax value used when reconstructing parameters with mflike. My point is: when you see that the data are cut at 5000, why do we need theoretical Cl up to 9000? So I'm a bit afraid a user will shrink the `lmax_theory` value to the lmax of the data, which can lead to parameter estimation bias (at least on simulated data :))

@sgiardie (Collaborator, Author) commented Sep 5, 2024

> I agree that `lmax_theory < self.lcuts` is indeed a mistake, but your fix (I'm not sure, was anything actually broken by this?) will not warn the user about the problem, and the theoretical Cls will be sharply cut.

Maybe I was a bit sloppy when explaining: we don't have this problem with the current defaults, but it would be a problem if the user decided to set `lmax_theory < self.lcuts`. In that case I think the code would just crash, since the theory Cl would be computed up to an ell that is not high enough for the scale cuts.

> I'm not saying that the previous code is better. As far as I remember, we did it that way because of a mismatch between the lmax value (around 9000) used when simulating data (data that can be cut at ell = 5000 afterwards) and the lmax value used when reconstructing parameters with mflike. My point is: when you see that the data are cut at 5000, why do we need theoretical Cl up to 9000? So I'm a bit afraid a user will shrink the `lmax_theory` value to the lmax of the data, which can lead to parameter estimation bias (at least on simulated data :))

I think it is still unlikely that a user would change the lmax without a valid reason.
I am thinking that, instead of going back to
`"Cl": {k: max(c, self.lmax_theory + 1) for k, c in self.lcuts.items()}}`
we can keep
`"Cl": {k: self.lmax_theory + 1 for k, _ in self.lcuts.items()}}`
and add a `raise LoggedError` if `self.lmax_theory < self.lcuts` (see the sketch below).
This way the user cannot force `lmax_theory` to be smaller than `self.lcuts`. And by setting the lmax of Cl just to `lmax_theory`, both `self.ells` and Cl are computed up to `lmax_theory`.
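A minimal sketch of such a guard, assuming it would run in `initialize()` after the scale cuts are built; `LoggedError` is Cobaya's standard logged exception, but the function name, placement, and message here are illustrative, not the PR's actual code:

```python
# Hedged sketch: refuse an lmax_theory below the highest requested scale cut.
from cobaya.log import LoggedError


def check_lmax_theory(log, lmax_theory, lcuts):
    """Guard that mflike's initialize() could run after building self.lcuts."""
    max_cut = max(lcuts.values())
    if lmax_theory < max_cut:
        raise LoggedError(
            log,
            "lmax_theory = %d is below the highest scale cut (%d); "
            "increase lmax_theory or lower the cuts in default_cuts['scales'].",
            lmax_theory,
            max_cut,
        )
```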

@@ -105,7 +105,7 @@ def initialize(self):
         self.log.info("Initialized!")
 
     def get_fg_requirements(self):
-        return {"ells": self.l_bpws,
+        return {"ells": self.l_bpws[:self.lmax_theory + 1],
Collaborator:
I'm always a bit scared when slicing numpy arrays, since I never know what the first index is. Basically, if `self.l_bpws` starts at 2 then I think we have a problem.

@sgiardie (Author):

Right, `l_bpws` should start from 2 if I remember correctly. So to have it go up to `lmax_theory`, we should use `[:self.lmax_theory - 1]`.
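A quick numpy illustration of the off-by-two issue (toy values; `l_bpws` here just stands in for the real bandpower window multipoles). A value-based mask would avoid depending on the starting index altogether:

```python
# Toy check: if l_bpws starts at ell = 2, a positional slice overshoots.
import numpy as np

lmax_theory = 10
l_bpws = np.arange(2, 20)  # multipoles 2..19

print(l_bpws[:lmax_theory + 1][-1])       # 12 -> two multipoles too far
print(l_bpws[:lmax_theory - 1][-1])       # 10 -> ends at lmax_theory
print(l_bpws[l_bpws <= lmax_theory][-1])  # 10 -> independent of the start
```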
