
Gridworld Navigation - Hindsight Experience Replay

2D navigation using DQN/Actor-Critic and Hindsight Experience Replay

This repository contains a PyTorch implementation of a simple 2D navigation environment, in which an agent needs to traverse a map and arrive at a destination pixel while circumventing obstacles. Both the agent's position and the goal are given implicitly in the input image. For every step in which the agent has not reached the goal, it receives a -1 reward, which makes the problem difficult: the reward signal is sparse. To train the agent, I started with a standard DQN algorithm coupled with HER (Hindsight Experience Replay), which helps overcome the sparse rewards. This only achieved around an 80% success rate in reaching the goal, and training takes quite a few hours. Next, I implemented an actor-critic version of HER, which recently achieved a ~90% success rate in reaching the goal pixel. I think that with a more sophisticated learning algorithm, such as Proximal Policy Optimization or Soft Actor-Critic, I could probably get better results. This was great fun to work on. I also wrote a Medium article on Hindsight Experience Replay; feel free to check it out.
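The HER trick mentioned above is easy to sketch. After every episode, each transition is stored twice: once with the original goal, and once with the goal relabeled to a state the agent actually reached, so that even failed episodes produce some positive learning signal. The sketch below is my own minimal illustration of the "final" relabeling strategy, not the buffer used in this repository: the `HERBuffer` name and the transition layout are assumptions, and since the goal here is embedded in the input image, real relabeling would also mean redrawing the goal pixel.

```python
import random
from collections import deque

import numpy as np


class HERBuffer:
    """Replay buffer with 'final'-goal hindsight relabeling (sketch)."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def store_episode(self, episode):
        # episode: list of (state, action, next_state, achieved_pos, goal_pos)
        final_achieved = episode[-1][3]  # where the agent actually ended up
        for state, action, next_state, achieved, goal in episode:
            # 1) Original transition: reward stays -1 until the true goal is hit.
            done = np.array_equal(achieved, goal)
            self.buffer.append((state, action, next_state, goal,
                                0.0 if done else -1.0, done))
            # 2) Hindsight transition: pretend the final achieved position
            #    was the goal all along, so the episode ends in "success".
            h_done = np.array_equal(achieved, final_achieved)
            self.buffer.append((state, action, next_state, final_achieved,
                                0.0 if h_done else -1.0, h_done))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)
```

With the "final" strategy, every stored episode is guaranteed to contain at least one rewarding transition (the last one, relabeled), which is what lets DQN get off the ground despite the -1-per-step reward.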

Learning curve for DQN-HER:

[learning curve image]

Learning curve for PG-HER:

[learning curve image]

And some examples of trajectories using a trained agent:

[trajectory images]

It is not so evident in the trajectories shown here, but I noticed that the agent tends to exploit the fact that the edges of the map are obstacle-free by construction, and often maneuvers along the edges even when it doesn't have to.
