
Releases: leggedrobotics/rsl_rl

v2.1.2

07 Feb 23:24

Overview

A patch fix for local installation of the library. Previously, the repository was missing a setup.py or setup.cfg, which prevented installing it locally in editable mode. We have now added a dummy setup.py to fix this issue.
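For reference, such a shim typically just delegates to setuptools and keeps all real metadata in pyproject.toml; a minimal sketch (the file shipped in the release may differ):

    # setup.py -- minimal shim so that `pip install -e .` works for a
    # project whose metadata lives in pyproject.toml. Sketch only; the
    # actual file in the repository may differ.
    from setuptools import setup

    setup()

With this in place, a local editable install is simply:

    pip install -e .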

Full Changelog: v2.1.1...v2.1.2

v2.1.1

07 Feb 22:08

Overview

We’re excited to announce that the rsl-rl library is now available on PyPI! You can install it easily with:

pip install rsl-rl-lib
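Note that the distribution name on PyPI (rsl-rl-lib) differs from the import name, which stays rsl_rl. A quick post-install check, using the standard library to read the installed version:

    from importlib.metadata import version

    import rsl_rl  # the module keeps the repository's import name

    print(version("rsl-rl-lib"))  # e.g. "2.1.1"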

Full Changelog: v2.0.1...v2.1.1

Fixed

  • Saves internal count of EmpiricalNormalization for resuming training by @tasdep in #30 (see the sketch after this list)
  • Fixes error caused by non-UTF-8 characters in git diff by @fan-ziqi in #31
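To illustrate the first fix: an empirical normalizer keeps a sample count alongside its running mean and variance, and if that count is not checkpointed, the statistics are effectively re-warmed from scratch when training resumes. A simplified stand-in (not the library's actual class):

    import torch
    import torch.nn as nn

    class RunningNormalizer(nn.Module):
        """Simplified stand-in for EmpiricalNormalization (illustrative only)."""

        def __init__(self, shape):
            super().__init__()
            # Registering `count` as a buffer means it is saved in state_dict(),
            # so resumed training continues the statistics instead of restarting.
            self.register_buffer("mean", torch.zeros(shape))
            self.register_buffer("var", torch.ones(shape))
            self.register_buffer("count", torch.zeros(()))

        @torch.no_grad()
        def update(self, x):
            batch_mean = x.mean(dim=0)
            batch_var = x.var(dim=0, unbiased=False)
            batch_count = x.shape[0]
            delta = batch_mean - self.mean
            total = self.count + batch_count
            # Standard parallel mean/variance merge (Chan et al.).
            self.mean = self.mean + delta * batch_count / total
            self.var = (self.var * self.count + batch_var * batch_count
                        + delta.pow(2) * self.count * batch_count / total) / total
            self.count = total

        def forward(self, x):
            return (x - self.mean) / (self.var.sqrt() + 1e-8)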

Release v2.0.1

11 Oct 12:53

Overview

Full Changelog: v2.0.0...v2.0.1

Fixed

  • Fixes RL device setting in the on-policy runner
  • Fixes issue with splitting and padding of trajectories for recurrent network architecture training (see the sketch after this list)
  • Updates wandb and neptune logging by @Mayankm96 in #18
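For context on the second fix: recurrent training requires cutting the flat rollout at episode boundaries and padding the resulting trajectories to a common length, which is easy to get subtly wrong. A generic sketch of the technique (illustrative helper, not rsl_rl's internal code):

    import torch
    from torch.nn.utils.rnn import pad_sequence

    def split_and_pad(obs, dones):
        """Split a flat rollout into per-episode chunks and pad to equal length.

        obs:   [T, obs_dim] tensor collected across time.
        dones: [T] bool tensor, True at the last step of each episode.
        Illustrative helper; not rsl_rl's internal API.
        """
        T = obs.shape[0]
        # One past each episode end, always keeping the trailing partial episode.
        ends = torch.nonzero(dones, as_tuple=False).squeeze(-1) + 1
        if ends.numel() == 0 or ends[-1].item() != T:
            ends = torch.cat([ends, torch.tensor([T])])
        starts = torch.cat([torch.zeros(1, dtype=torch.long), ends[:-1]])
        chunks = [obs[s:e] for s, e in zip(starts.tolist(), ends.tolist())]
        lengths = torch.tensor([c.shape[0] for c in chunks])
        # [num_chunks, max_len, obs_dim]; build a mask from `lengths` so the
        # padded steps are excluded from the recurrent loss.
        padded = pad_sequence(chunks, batch_first=True)
        return padded, lengths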

Release v2.0.0

01 Nov 23:53
51d06cf

This release brings the following updates to the library:

Added

  • Adds empirical normalization for observations and rewards
  • Adds logging to Weights & Biases and Neptune (see the sketch after this list)
  • Adds pre-commit formatter
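As a rough illustration of what the Weights & Biases path looks like (rsl_rl wires this up through its runner configuration; the project name and metric keys here are hypothetical):

    import wandb

    # Hypothetical project/run names; rsl_rl configures its writer internally.
    wandb.init(project="legged-locomotion", name="ppo_run_0")

    for it in range(3):  # stand-in for the training loop
        wandb.log({"mean_reward": 0.1 * it, "iteration": it})

    wandb.finish()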

Fixed

  • Fixes issue with splitting and padding of trajectories for recurrent network architecture training

Changed

  • Changes the extras key for storing logs: the earlier extras["episode"] is replaced with extras["log"] to make it more generic (see the sketch after this list)
  • Modifies the config structure to nest the class names within their respective algorithm and architecture dictionaries
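For environments written against rsl_rl, the first change means the info dictionary returned by step() should populate the new key; a schematic sketch (the metric names are made up):

    # Inside a vectorized environment's step() -- schematic only.
    extras = {}

    # Before v2.0.0, loggable scalars were read from extras["episode"]:
    #   extras["episode"] = {"rew_tracking": 0.42}

    # From v2.0.0 on, the runner reads the more generic extras["log"]:
    extras["log"] = {"rew_tracking": 0.42, "terrain_level": 3.0}

    # return obs, rewards, dones, extras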

Pre-Release v1.0.2

20 Oct 15:25
2ad79cf
Compare
Choose a tag to compare

This version corresponds to the original source code for rsl_rl at the point of publication of "Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning" by Rudin et al.

The release contains an optimized PPO implementation suited for use with GPU-accelerated simulators such as Isaac Gym.
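For readers new to the method, the core of PPO is the clipped surrogate objective, evaluated here over large batches collected from thousands of parallel simulated environments. A generic sketch of that loss (standard PPO, not rsl_rl's exact code):

    import torch

    def ppo_clip_loss(log_prob, old_log_prob, advantage, clip_eps=0.2):
        """Standard PPO clipped surrogate loss (generic sketch)."""
        # Probability ratio between the current policy and the one
        # that collected the data.
        ratio = torch.exp(log_prob - old_log_prob)
        unclipped = ratio * advantage
        clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
        # Maximize the surrogate, i.e. minimize its negation.
        return -torch.min(unclipped, clipped).mean()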

This is the version of the code compatible with legged_gym.