
Possible Hierarchical RL PR #132

Open
peasant98 opened this issue Mar 24, 2021 · 1 comment

Comments

@peasant98 commented Mar 24, 2021

Hello, I am an RL researcher, and my team and I have recently implemented HIRO (Data Efficient Hierarchical Reinforcement Learning with Off-Policy Correction) with PFRL. I'm wondering whether a PR for an HRL algorithm (which required some large changes) would be welcome in this repository.

Thanks!
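For readers unfamiliar with HIRO: its distinctive ingredient beyond standard goal-conditioned HRL is an off-policy correction that relabels stored high-level goals so old experience stays consistent with the current low-level policy. The sketch below illustrates just that relabeling step. It is a minimal illustration under stated assumptions, not the contributors' code or a PFRL API; `relabel_goal`, `low_level_logp`, and all parameter choices are hypothetical stand-ins.

```python
import numpy as np

def relabel_goal(states, actions, orig_goal, low_level_logp,
                 n_samples=8, std=0.5, rng=None):
    """Pick the high-level goal that best explains a stored low-level
    action sequence under the *current* low-level policy (HIRO's
    off-policy correction).

    states:  array of shape (c + 1, state_dim), the sub-trajectory
    actions: array of shape (c, action_dim), the stored low-level actions
    orig_goal: the goal originally issued by the high-level policy
    low_level_logp(state, goal, action): hypothetical callable returning
        the current low-level policy's log-probability of `action`
    """
    rng = np.random.default_rng() if rng is None else rng
    delta = states[-1] - states[0]  # observed state change s_{t+c} - s_t
    # Candidate goals: the original goal, the observed state change, and
    # Gaussian samples centered on that change (10 candidates total in
    # the paper; the std used here is an illustrative choice).
    candidates = [orig_goal, delta] + [
        rng.normal(loc=delta, scale=std) for _ in range(n_samples)
    ]

    def action_log_likelihood(goal):
        # HIRO keeps the goal fixed in relative terms via the goal
        # transition h(s_t, g_t, s_{t+1}) = s_t + g_t - s_{t+1}.
        g, total = goal, 0.0
        for s, a, s_next in zip(states[:-1], actions, states[1:]):
            total += low_level_logp(s, g, a)
            g = s + g - s_next
        return total

    # Relabel with the candidate maximizing the action sequence likelihood.
    return max(candidates, key=action_log_likelihood)
```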

@muupan (Member) commented Mar 25, 2021

Hi, the developer team thinks it is possible to merge a PR adding such a new algorithm, and we would really appreciate the contribution! To assess beforehand how easily a specific PR could be merged, could you let us know what your PR would look like, especially in the following respects?

  • what kind of changes the PR would make, e.g.
    • how "large" is it?
    • could it affect other algorithms?
    • could it break backward compatibility of some API?
  • how the implementation has been verified, e.g.
    • is there any significant performance gap between the official HIRO implementation and yours?

@prabhatnagarajan changed the title from "Possible Hiearchical RL PR" to "Possible Hierarchical RL PR" on Mar 25, 2021