
# Quadruped Locomotion in Isaac Sim

Quadruped locomotion learning through reinforcement learning (RL) and model predictive control (MPC).

## Installation

Isaac-PPO and Isaac-Sim Orbit are required for this repository. Please follow the installation instructions in both repositories. When installing Orbit, make sure to create a virtual environment named `orbit` using the instructions provided.

## Tasks

Below is a list of tasks that have been tested. They come from the default Orbit tasks listed here.

1. `Isaac-Velocity-Flat-Unitree-A1-v0`
2. `Isaac-Velocity-Rough-Unitree-A1-v0`
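
For reference, the snippet below is a rough sketch of how one of these task IDs maps to an Orbit environment outside of `main.py`. The module paths and helpers follow the original Orbit release and are assumptions, not code from this repository; newer versions (e.g. Isaac Lab) use different module names.

```python
# Sketch only: module paths and helper names follow the original Orbit
# release and are assumptions; they are not taken from this repository.
from omni.isaac.orbit.app import AppLauncher

# The simulation app must be started before importing any other Orbit modules.
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app

import gymnasium as gym
import omni.isaac.orbit_tasks  # noqa: F401  (registers the Isaac-* gym environments)
from omni.isaac.orbit_tasks.utils import parse_env_cfg

task = "Isaac-Velocity-Flat-Unitree-A1-v0"
env_cfg = parse_env_cfg(task, num_envs=16)
env = gym.make(task, cfg=env_cfg)

obs, _ = env.reset()
env.close()
simulation_app.close()
```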

## Usage

### Modes

There are several modes available to run the package, listed below:

1. `rl` - Runs pure RL (PPO) on the specified task
2. `mpc` - Runs pure MPC control on the specified task
3. `mpc-rl` - Runs combined RL + MPC on the specified task

Note: For the `rl` and `mpc-rl` modes, the default run trains the models. To play a learned policy, pass the command-line argument `--play_mode` when running.
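
The exact wiring of these modes lives in `main.py`; the sketch below only illustrates how such a mode switch is typically exposed with `argparse`. The `--mode` flag name is an assumption for illustration and may not match the repository's actual arguments.

```python
# Hypothetical sketch: the --mode flag name is an assumption and may not
# match the arguments actually defined in main.py.
import argparse

parser = argparse.ArgumentParser(description="Quadruped locomotion in Isaac Sim")
parser.add_argument("--task", type=str, required=True,
                    help="Orbit task ID, e.g. Isaac-Velocity-Flat-Unitree-A1-v0")
parser.add_argument("--mode", type=str, default="rl", choices=["rl", "mpc", "mpc-rl"],
                    help="Control mode to run")
parser.add_argument("--play_mode", action="store_true",
                    help="Play a trained policy instead of training (rl and mpc-rl modes)")
args = parser.parse_args()
```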

### RL

#### Training

Example with the flat Unitree-A1 environment and video recording enabled. Note: hyperparameters can be changed in `hyperparameters.py`; an illustrative sketch of typical settings follows the command below.

```bash
python main.py --task Isaac-Velocity-Flat-Unitree-A1-v0 --num_envs 4096 --headless --video --offscreen_render
```
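
The block below is only an illustrative sketch of the kind of PPO settings usually exposed in a file like `hyperparameters.py`; the actual names and defaults are defined by Isaac-PPO and may differ.

```python
# Illustrative values only; the real names and defaults in hyperparameters.py
# are defined by Isaac-PPO and may differ.
LEARNING_RATE = 3.0e-4     # Adam step size
GAMMA = 0.99               # discount factor
GAE_LAMBDA = 0.95          # GAE smoothing parameter
CLIP_RATIO = 0.2           # PPO surrogate clipping range
ENTROPY_COEF = 0.005       # exploration bonus weight
NUM_STEPS_PER_ENV = 24     # rollout horizon per environment
NUM_LEARNING_EPOCHS = 5    # PPO epochs per rollout
NUM_MINI_BATCHES = 4       # minibatches per epoch
```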

#### Playing

Example with the flat Unitree-A1 environment. This will automatically load the latest trained model from the logs unless `--model_path` is specified (a sketch of how such a lookup could work follows the command).

```bash
python main.py --task Isaac-Velocity-Flat-Unitree-A1-v0 --num_envs 10 --play_mode
```
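
How the "latest trained model" is resolved is handled inside the repository; the helper below is only a hypothetical sketch of that behaviour, assuming checkpoints are written as `.pt` files somewhere under a `logs/` directory.

```python
# Hypothetical sketch: the actual log layout and checkpoint extension used by
# Isaac-PPO may differ.
from pathlib import Path


def latest_checkpoint(log_root: str = "logs") -> Path:
    """Return the most recently modified checkpoint under the log directory."""
    checkpoints = sorted(Path(log_root).rglob("*.pt"), key=lambda p: p.stat().st_mtime)
    if not checkpoints:
        raise FileNotFoundError(
            f"No checkpoints found under {log_root!r}; pass --model_path explicitly."
        )
    return checkpoints[-1]
```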

### MPC

TODO

### MPC-RL

TODO