
Docs and examples on how to report performance issues #936

Open
MischaPanch opened this issue Sep 7, 2023 · 6 comments
@MischaPanch (Collaborator) commented Sep 7, 2023

RL is flaky, so one needs to report statistically significant results, and we should help contributors and users do that. (I originally considered nni for parallel execution, but nni is dead, and we shouldn't rely on external tools unless strictly necessary, especially those requiring external config files.)

Original motivation:

I ran 10 runs for each config and there wasn't much difference. I guess that was an unlucky vs. lucky seed case

Originally posted by @MischaPanch in #886 (comment)

@MischaPanch added this to the Release 1.0.0 milestone Sep 7, 2023
@MischaPanch added the enhancement and documentation labels Sep 7, 2023
@MischaPanch (Collaborator, Author)

Related to #978

@MischaPanch (Collaborator, Author)

@bordeauxred this is essentially the task that you're currently working on, as a first step towards #978. Please comment here so I can assign you :)

@bordeauxred (Contributor)

Comment as requested @MischaPanch

@MischaPanch (Collaborator, Author)

@maxhuettenrauch please also comment so I can assign you

@maxhuettenrauch (Collaborator)

.

@MischaPanch (Collaborator, Author)

After several discussions, the todos here became clearer:

  • Allow setting policy random seeds and training seeds separately
  • Allow setting training seeds explicitly
  • Prevent accidental overlap between training seeds and test seeds (a seeding sketch follows this list)
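
A minimal sketch of what that separation could look like; `ExperimentSeeds` and all names below are hypothetical, not existing Tianshou API:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ExperimentSeeds:
    """Hypothetical container that keeps the three kinds of seeds apart."""

    policy_seed: int  # seeds network init and action sampling
    train_env_seeds: tuple[int, ...]  # one seed per training environment
    test_env_seeds: tuple[int, ...]  # evaluation seeds, disjoint from training

    def __post_init__(self) -> None:
        overlap = set(self.train_env_seeds) & set(self.test_env_seeds)
        if overlap:
            raise ValueError(f"training and test seeds overlap: {sorted(overlap)}")


seeds = ExperimentSeeds(
    policy_seed=0,
    train_env_seeds=tuple(range(10)),         # seeds 0..9 for training
    test_env_seeds=tuple(range(1000, 1010)),  # disjoint range for evaluation
)
```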

As the final outcome of this issue, there should be one script that allows training some policy with a fixed configuration on some environment, such that after training one can clearly state one of three things:

  1. With at least X_1 training seeds and X_2 policy seeds, the performance of the policy is good on non-training seeds: the interquartile mean is high and the interquartile variance is small.
  2. With at least X_1 training seeds and X_2 policy seeds, the performance of the policy is bad (low interquartile mean, still small variance).
  3. The variance is so large that the results are inconclusive. This may mean that the number of training seeds should be changed (we got unlucky a few times, but with more training seeds the sample efficiency may improve), or that the algorithm-env combination has intrinsically high variance and is prone to collapsing to bad solutions (a classification sketch follows this list).
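
A sketch of how such a script could classify the outcome; the thresholds and the `classify_outcome` helper are hypothetical, and the interquartile mean is obtained by trimming 25% from each tail:

```python
import numpy as np
from scipy import stats


def classify_outcome(
    returns: np.ndarray,       # final return of each (training seed, policy seed) run
    good_iqm: float = 200.0,   # hypothetical, environment-specific threshold
    max_iqr: float = 50.0,     # hypothetical spread threshold
) -> str:
    iqm = stats.trim_mean(returns, proportiontocut=0.25)  # interquartile mean
    q1, q3 = np.percentile(returns, [25, 75])
    if q3 - q1 > max_iqr:  # case 3: spread too large to conclude anything
        return "inconclusive: high variance, consider more training seeds"
    return "good" if iqm >= good_iqm else "bad"  # cases 1 and 2


# e.g. 20 runs = 10 training seeds x 2 policy seeds
runs = np.random.default_rng(0).normal(loc=210.0, scale=10.0, size=20)
print(classify_outcome(runs))
```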

In any case, after running such a script, there should be as few open questions regarding randomness and flakiness as possible. The runs should be exactly reproducible. The relevant statistics (in particular, interquartile statistics) should be easy to visualize; maybe some plots should be saved as PNG by default. Additional statistics should be easy to collect.

We can think about integrating with rliable and openrlbenchmark, but that is not a priority and can be done later. Maybe some code from rliable can help us here.
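
For reference, computing an IQM with stratified bootstrap confidence intervals via rliable's documented API could look roughly like this (the score shapes and the algorithm name are illustrative):

```python
import numpy as np
from rliable import library as rly
from rliable import metrics

# maps algorithm name -> scores of shape (num_runs, num_tasks)
score_dict = {"ppo": np.random.default_rng(0).uniform(size=(20, 1))}


def iqm(scores: np.ndarray) -> np.ndarray:
    return np.array([metrics.aggregate_iqm(scores)])


point_estimates, interval_estimates = rly.get_interval_estimates(
    score_dict, iqm, reps=2000  # bootstrap repetitions
)
print(point_estimates["ppo"], interval_estimates["ppo"])
```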

Note that the algorithm configuration should ideally be written in Python. We can use tools like Hydra or Optuna to parallelize runs, but we should not rely on their YAML-based configuration mechanisms.
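
A Python-native configuration could be as simple as a frozen dataclass (all names below are hypothetical):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ExperimentConfig:
    """Hypothetical experiment config in plain Python; no YAML involved."""

    env_id: str = "HalfCheetah-v4"
    lr: float = 3e-4
    gamma: float = 0.99
    num_train_seeds: int = 10
    num_policy_seeds: int = 2


config = ExperimentConfig(lr=1e-4)  # overrides are plain Python, fully type-checked
```

Being immutable, plain Python data, such a config is trivial to log alongside a run, which also helps with exact reproducibility.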

The ConfigSpace project might be a good fit for describing our configurations. Note also my PR there that introduces better typing support for sampling configurations. Until it's merged, we could put the ConfigSpace extension into tianshou for the time being.
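
Describing a search space with ConfigSpace's 0.x API might look like this; the hyperparameters themselves are just illustrative:

```python
from ConfigSpace import ConfigurationSpace
from ConfigSpace.hyperparameters import (
    CategoricalHyperparameter,
    UniformFloatHyperparameter,
)

cs = ConfigurationSpace(seed=0)  # seeding makes sampling reproducible
cs.add_hyperparameters([
    UniformFloatHyperparameter("lr", lower=1e-5, upper=1e-2, log=True),
    CategoricalHyperparameter("activation", ["relu", "tanh"]),
])
print(cs.sample_configuration())
```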

@maxhuettenrauch @opcode81
