
Mixed double precision for PPO algorithm #155

Open · wants to merge 4 commits into develop
Conversation

@lopatovsky (Contributor) commented on Jun 10, 2024

Mixed precision

Motivation:

Inspired by RLGames, we implemented automatic mixed precision to boost the performance of PPO (a minimal sketch of the pattern is shown after the sources below).

Sources:

https://pytorch.org/docs/stable/amp.html

https://pytorch.org/docs/stable/notes/amp_examples.html
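For reference, here is a minimal sketch of the autocast + gradient-scaling pattern from the linked PyTorch docs. It is illustrative only: the `policy` network, the dummy data, and the MSE loss are stand-ins, not skrl's actual PPO implementation.

```python
import torch

device = "cuda"
policy = torch.nn.Sequential(
    torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1)
).to(device)
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)
scaler = torch.cuda.amp.GradScaler()  # rescales the loss to avoid float16 gradient underflow

states = torch.randn(256, 8, device=device)   # dummy mini-batch
returns = torch.randn(256, 1, device=device)

for _ in range(10):  # stand-in for the PPO learning epochs
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():  # eligible ops run in float16, the rest stay in float32
        values = policy(states)
        loss = torch.nn.functional.mse_loss(values, returns)  # placeholder for the PPO loss
    scaler.scale(loss).backward()  # backward pass on the scaled loss
    scaler.step(optimizer)         # unscales gradients; skips the step if they contain inf/NaN
    scaler.update()                # adjusts the scale factor for the next iteration
```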

Speed eval:

  • Big neural network (units: [2048, 1024, 1024, 512])

  • 10,000 steps

  • Running on top of the OIGE environment simulation (constant across runs)

  • skrl uses a single forward pass implementation

| Library | Mixed precision | Time (s) | Slowdown factor (base: RLGames with mixed precision) |
|---------|-----------------|----------|------------------------------------------------------|
| RLGames | No              | 448      | 1.322x                                               |
| RLGames | Yes             | 339      | 1 (base)                                             |
| SKRL    | No              | 475      | 1.401x                                               |
| SKRL    | Yes             | 373      | 1.1x                                                 |
| SKRL    | Yes *           | 358      | 1.056x                                               |

The slowdown factor is each run's time divided by the base time, e.g. 448 s / 339 s ≈ 1.322x.

* In this run, mixed precision was also used for inference during the data-collection phase.
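To illustrate what the starred row means, the same `autocast` context can also wrap the rollout forward passes; the snippet below is a sketch, and the `policy(states)` call is a placeholder for sampling actions, not skrl's API.

```python
# Hypothetical data-collection step: inference also runs in mixed precision
with torch.no_grad(), torch.cuda.amp.autocast():
    actions = policy(states)  # placeholder for sampling actions from the policy
```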

Quality eval:

  • We trained a policy for our task multiple times with each configuration. We didn't observe any statistically significant difference in the quality of the final results.

@lopatovsky changed the base branch from main to develop on July 15, 2024