
Expand AbstractDynodeRunner.save_inference_timelines()? #311

Open
kokbent opened this issue Jan 2, 2025 · 0 comments · May be fixed by #316
kokbent commented Jan 2, 2025

Currently, AbstractDynodeRunner.save_inference_timelines() is designed to randomly select a few posterior particles, run them, and extract the "timeline" (predicted hospitalizations, strain proportions, etc.). It was created with fitting in mind, where you get thousands of posterior samples and you just want to run a subset as representative of the overall fit.

However, this function turns out to be useful beyond just this context. Currently, it has been used, for example, to also run projections for a number of particles. Basically, given a set of parameters, this function can produce the simulation trajectory.

Some thoughts about expanding (or refactoring?) this function:

  1. Move random selection outside of the function: Since the random selection is built into the function, I cannot run a specific particle within the dictionary. Say all I care about is the output of 0_3; then this function is not going to be useful. Sure, I can hack around it by making a dictionary of a 1 x 1 list, but that is clunky. Ideally, we should let the user choose the particles to run, e.g., particle_to_run = [1, 4, 8, 13, 42]. And if they want a random selection, they can write a random sample themselves. (Although this does make it not backward compatible...)
  2. Potential to support multiprocessing: we should be able to leverage multiple cores when running, say, tens or hundreds of particles. It may potentially fail in Azure... but should work well locally 🤔
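To make the proposal concrete, here is a minimal sketch of how both ideas could combine, assuming a hypothetical `save_timelines` wrapper and a `run_particle` stand-in for the per-particle work that `save_inference_timelines()` currently does (none of these names are in the actual DynODE API):

```python
import random
from multiprocessing import Pool

def run_particle(particle_id, posteriors):
    """Hypothetical stand-in for the per-particle work inside
    save_inference_timelines(): run one particle's parameters and
    return its simulated timeline."""
    params = posteriors[particle_id]
    # ... run the ODE model with `params` here; doubled values as a placeholder ...
    return {"particle": particle_id, "timeline": [p * 2 for p in params]}

def save_timelines(posteriors, particles_to_run=None, processes=1):
    # Selection happens outside the runner: the caller passes explicit IDs,
    # or omits them to reproduce the old random-subset behavior.
    if particles_to_run is None:
        particles_to_run = random.sample(list(posteriors), k=min(5, len(posteriors)))
    if processes > 1:
        # Fan the particles out across worker processes.
        with Pool(processes) as pool:
            return pool.starmap(
                run_particle, [(pid, posteriors) for pid in particles_to_run]
            )
    return [run_particle(pid, posteriors) for pid in particles_to_run]
```

With this shape, `save_timelines(posteriors, particles_to_run=["0_3"])` runs exactly one particle, while leaving `particles_to_run` unset keeps something close to the current random-subset behavior.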