Policy-based optimization

PBO (policy-based optimization) is a degenerate policy gradient algorithm used for black-box optimization. It shares common traits with both DRL (deep reinforcement learning) policy gradient methods and ES (evolution strategies) techniques. In this repository, we present a parallel PBO algorithm with covariance matrix adaptation, with applications to (i) the minimization of simple analytical functions, and (ii) the optimization of parametric control laws for the chaotic Lorenz attractor. The related pre-print can be found here. This paper formalizes the approach used in the following previous works:

  • Direct shape optimization through deep reinforcement learning (paper, pre-print and github repository),
  • Single-step deep reinforcement learning for open-loop control of laminar and turbulent flows (paper and pre-print),
  • Deep reinforcement learning for the control of conjugate heat transfer with application to workpiece cooling (paper and pre-print)
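To make the connection between single-step policy gradients and evolution strategies concrete, here is a minimal, illustrative sketch in plain numpy. It is not the implementation used in this repository (which relies on a neural-network policy and covariance matrix adaptation): the search distribution is an isotropic Gaussian whose mean is updated with a reward-weighted log-likelihood gradient.

```python
import numpy as np

def pbo_sketch(cost, x0, sigma=0.5, pop=8, n_gen=50, lr=0.1):
    """Toy single-step policy-gradient loop: the 'policy' is a Gaussian
    search distribution over candidate solutions, updated from normalized
    rewards, much like an evolution strategy."""
    mu = np.array(x0, dtype=float)
    for _ in range(n_gen):
        eps = np.random.randn(pop, mu.size)       # one generation of samples
        x   = mu + sigma * eps                    # each sample is one single-step episode
        r   = -np.array([cost(xi) for xi in x])   # reward = -cost
        adv = (r - r.mean()) / (r.std() + 1e-8)   # normalized advantage
        # Reward-weighted log-likelihood gradient step on the mean
        mu += lr * sigma * (adv[:, None] * eps).mean(axis=0)
    return mu
```

For instance, `pbo_sketch(lambda x: float((x**2).sum()), [2.5, 2.5])` drives the mean toward the origin on a simple quadratic cost.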

Usage

The environments from the paper are available in the envs/ folder. Each .py environment file requires a matching .json parameter file. To run an environment, just use:

python3 start.py envs/my_env.json

Below are some selected visuals of cases presented in the paper.

Parabola function

We consider the minimization of a parabola defined on [-5,5]x[-5,5]. Below is the course of a single run, generation after generation, with a starting point at [2.5,2.5]:
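Such a cost can be fed directly to the toy pbo_sketch function shown above; the quadratic form below is only an assumption, and the exact parabola used in the repository may differ.

```python
parabola = lambda x: float(x[0]**2 + x[1]**2)   # assumed form, minimum at [0, 0]
x_best   = pbo_sketch(parabola, [2.5, 2.5])     # starting point from the paper
```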

Rosenbrock function

The Rosenbrock function is here defined on [-2,2]x[-2,2]. It contains a very narrow valley, with a minimum at [1,1]. The shape of the valley makes it a hard optimization problem for many algorithms. Here is the course of a single run, generation after generation, with a starting point at [0.0,-1.0]:
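For reference, the standard two-dimensional Rosenbrock function, consistent with the minimum at [1,1] stated above, reads:

```python
def rosenbrock(x):
    # f(x, y) = (1 - x)^2 + 100 (y - x^2)^2, global minimum 0 at (1, 1)
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2
```

It can also be plugged into the sketch above, e.g. `pbo_sketch(rosenbrock, [0.0, -1.0])`.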

Parametric control laws for the chaotic Lorenz attractor

We consider the equations of the Lorenz attractor with a velocity-based control term:

We make use of the following non-linear control with four free parameters:

Two control cases are designed: the first one forces the system to stay in the x<0 quadrant, while the second one maximizes the number of sign changes (cases inspired by this thesis). Below is a comparison between the two controlled cases.
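The sketch below integrates a controlled Lorenz system and evaluates the first objective (fraction of time spent in the x<0 quadrant). Both the placement of the control term in the dy/dt equation and the simple affine control law with four parameters are illustrative assumptions; the actual non-linear control law and reward definitions are given in the paper and in the corresponding environment file.

```python
import numpy as np

def lorenz_reward(theta, T=25.0, dt=0.005, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Integrate a controlled Lorenz system and return the fraction of time
    spent in the x < 0 quadrant (first control objective).  The control law
    below is a generic affine illustration with four parameters, NOT the
    non-linear law of the paper, and is assumed to act on the dy/dt equation."""
    def rhs(s):
        x, y, z = s
        u = theta[0]*x + theta[1]*y + theta[2]*z + theta[3]
        return np.array([sigma*(y - x), x*(rho - z) - y + u, x*y - beta*z])
    s = np.array([10.0, 10.0, 10.0])              # arbitrary initial condition
    n, below = int(T/dt), 0
    for _ in range(n):
        k1 = rhs(s); k2 = rhs(s + 0.5*dt*k1)      # classical RK4 step
        k3 = rhs(s + 0.5*dt*k2); k4 = rhs(s + dt*k3)
        s  = s + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0
        below += (s[0] < 0.0)
    return below / n
```

Maximizing this reward over the four parameters theta is then a black-box problem of the same kind as the analytical functions above.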

Parabola function on triangular domain

To test the case of dependent variables, we consider a parabola function on a triangular domain, with x in [0,1] and y in [0,1-x]. The parabola has its minimum at [0.1,0.8], while the starting point is located at [0.2,0.2]:
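One simple way to handle such dependent bounds is to optimize two independent variables in [0,1]x[0,1] and map them onto the triangle. The re-parameterization below is only one possible choice, not necessarily the one used in the repository, and the parabola is assumed to be centered on the stated minimum.

```python
def triangle_map(a, b):
    # Map (a, b) in [0,1] x [0,1] onto the triangle x in [0,1], y in [0, 1-x]
    x = a
    y = b * (1.0 - a)
    return x, y

def triangle_parabola(p):
    # Assumed quadratic cost centered on the stated minimum [0.1, 0.8]
    x, y = triangle_map(p[0], p[1])
    return (x - 0.1)**2 + (y - 0.8)**2
```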
