Some things to work on:

1. **Faster code-loading** Julia 1.6 alone is at least ~3x faster for `using CMBLensing`. Will that be enough to make it tolerable? Maybe. Could also try some fancier stuff with SnoopCompile.jl, also in conjunction with (6). (Rough sketch at the end of this list.)

2. **Make autodiff less brittle.** E.g. this fails,

   ```julia
   using CMBLensing, Zygote
   f = FlatMap(rand(4,4))
   gradient(f -> sum(sin.(f)), f)
   ```

   even though it works for plain `Array`s,

   ```julia
   f = rand(4,4)
   gradient(f -> sum(sin.(f)), f)
   ```

   This often leads to gradients w.r.t. θ not working unless you are very careful about how you write the posterior. Would also be nice to be able to get Hessians w.r.t. θ. Probably want to wait until Diffractor.jl is out and switch to that simultaneously?

3. **Improve interface for custom posteriors.** Custom `DataSet`s are ok (might still think about using Classes.jl), and `lnP` is decent too, but doing `MAP_joint`, `MAP_marg`, or `sample_joint` with your custom posterior pretty much involves copy/pasting the entire existing function and specializing it. Maybe hook into an interface like LogDensityProblems.jl? (Sketch below.)

4. **True curved sky support.** 'Nuff said.

5. **Fewer GPU memory errors for 3G-sized patches.** A 3G-sized 1024×2048 analysis can still be finicky on GPU due to memory errors.

6. **Don't store metadata (θpix, Nside, etc.) in type information.** Probably cleaner, easier to add more stuff, and won't lead to having to wait on precompilation again if all you did was switch θpix, etc. (Sketch below.)

7. **Use LinearMaps instead of janky `LazyBinaryOp`.** See also Jutho/LinearMaps.jl#118. (Sketch below.)

8. **Faster Wiener filtering.** Bigger analyses are dominated by this, so we need to improve it (a toy sketch of the underlying solve is below). Some ideas:

9. **More sophisticated HMC.** Maybe use a real HMC library that actually does proper NUTS? (Sketch below.)

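A few rough sketches of what some of these could look like. All of them are illustrative only: file names, type names, and toy posteriors below are made up, not existing CMBLensing API.

For (1), one option is generating precompile directives with SnoopCompile.jl. A sketch following the `@snoopi` workflow (the SnoopCompile API has changed across versions, so treat the details as approximate); `snoop_script.jl` is a hypothetical script that does `using CMBLensing` and exercises whatever first calls we want to be fast:

```julia
using SnoopCompileCore

# Record which method instances get (expensively) inferred while running the script.
inf_timing = @snoopi tmin=0.01 include("snoop_script.jl")

using SnoopCompile

# Group the results by module and emit precompile_*.jl files the package can `include`.
pc = SnoopCompile.parcel(inf_timing)
SnoopCompile.write("precompile", pc)
```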
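For (3), hooking into LogDensityProblems.jl would mean putting a `DataSet` + `lnP` behind that package's small interface, so any sampler or optimizer that speaks LogDensityProblems can be used without specializing `MAP_joint`-style functions by hand. A minimal sketch against the LogDensityProblems 1.x interface; `PosteriorProblem` and `my_lnP` are hypothetical stand-ins:

```julia
using LogDensityProblems

# Hypothetical wrapper: `ds` would hold a CMBLensing DataSet, `my_lnP` its log-posterior.
struct PosteriorProblem{DS}
    ds::DS
    dim::Int
end

my_lnP(θ, ds) = -sum(abs2, θ) / 2   # placeholder log-posterior for the sketch

LogDensityProblems.logdensity(p::PosteriorProblem, θ) = my_lnP(θ, p.ds)
LogDensityProblems.dimension(p::PosteriorProblem) = p.dim
LogDensityProblems.capabilities(::Type{<:PosteriorProblem}) =
    LogDensityProblems.LogDensityOrder{0}()   # log-density only; gradients could come via AD wrappers

p = PosteriorProblem(nothing, 4)
LogDensityProblems.logdensity(p, randn(4))
```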
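For (6), the gist is moving θpix/Nside out of the type parameters and into a field, so changing them produces the same concrete type and reuses all compiled code. A schematic comparison (these structs are made up for illustration, not the actual CMBLensing definitions):

```julia
# Metadata in the type: each new (θpix, Nside) is a brand-new type, so every method
# touching it recompiles (and precompile caches miss) whenever the resolution changes.
struct FlatMapTyped{θpix, Nside, T, A<:AbstractMatrix{T}}
    Ix::A
end

# Metadata in a field: one concrete type for any θpix/Nside; only the element/array
# types (which genuinely matter for dispatch and codegen) stay in the type.
struct ProjInfo
    θpix::Float64
    Nside::Int
end

struct FlatMapField{T, A<:AbstractMatrix{T}}
    Ix::A
    proj::ProjInfo
end

f1 = FlatMapField(rand(4,4), ProjInfo(1.0, 4))
f2 = FlatMapField(rand(8,8), ProjInfo(3.0, 8))
typeof(f1) === typeof(f2)   # true: switching θpix/Nside triggers no recompilation
```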
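For (7), LinearMaps.jl already provides lazy sums, products, and adjoints of matrix-free operators, which is roughly what `LazyBinaryOp` hand-rolls. A small sketch with toy operators:

```julia
using LinearMaps, LinearAlgebra

n = 16

# Matrix-free operators: a forward function, plus an adjoint function for the non-symmetric one.
A = LinearMap(x -> 2 .* x, n; issymmetric=true)
B = LinearMap(cumsum, y -> reverse(cumsum(reverse(y))), n)

C = 3A + B' * B      # stays lazy; nothing is materialized
C * ones(n)          # applies the whole composition matrix-free
Matrix(C)            # densify only if you actually want to inspect it
```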
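For context on (8): the Wiener filter is the solution of (S⁻¹ + N⁻¹) x = N⁻¹ d, and for big maps that system is solved iteratively (conjugate gradient), so the speed lives almost entirely in the preconditioner and solver choices. A toy sketch with stand-in diagonal covariances (in the real problem S and N are not simultaneously diagonal in any single basis, which is the whole difficulty):

```julia
using LinearAlgebra, IterativeSolvers

# Toy stand-ins for the signal and noise covariance operators.
n = 256
S = Diagonal(rand(n) .+ 1)
N = Diagonal(fill(0.1, n))
d = randn(n)                     # "data"

A = inv(S) + inv(N)              # Wiener-filter system operator
b = N \ d

# Preconditioned CG. With these toys the diagonal preconditioner is exact; in the real
# problem, picking a good Pl is where most of the speedup would come from.
x̂ = cg(A, b; Pl = Diagonal(A), maxiter = 500)
```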
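For (9), AdvancedHMC.jl is one candidate for "a real HMC library". A sketch adapted from its README, on a toy Gaussian target standing in for lnP(θ); the AdvancedHMC API has shifted between versions, so the exact constructor names may need updating:

```julia
using AdvancedHMC, ForwardDiff

# Toy target: standard-normal log-density standing in for the real posterior.
ℓπ(θ) = -sum(abs2, θ) / 2

D = 10
initial_θ = randn(D)
n_samples, n_adapts = 2_000, 1_000

metric      = DiagEuclideanMetric(D)
hamiltonian = Hamiltonian(metric, ℓπ, ForwardDiff)   # gradients via ForwardDiff

initial_ϵ  = find_good_stepsize(hamiltonian, initial_θ)
integrator = Leapfrog(initial_ϵ)

proposal = NUTS{MultinomialTS, GeneralisedNoUTurn}(integrator)   # proper NUTS
adaptor  = StanHMCAdaptor(MassMatrixAdaptor(metric), StepSizeAdaptor(0.8, integrator))

samples, stats = sample(hamiltonian, proposal, initial_θ, n_samples, adaptor, n_adapts)
```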