# SmartSquareRL

This project was created for an Engineering Thesis.

## Table of contents

- General
- Gameplay
- Technologies/libraries used
- How to run

## General

The repository will contain the game and a Reinforcement Learning algorithm that learns to play it. Currently, the game is finished. The base version of the game can be found here.

The game was written in C++ to be as optimized as possible. The RL algorithm will be written in Python. The gRPC library and Protocol Buffers are used for communication between the game and the AI.
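
As a rough sketch of what that link could look like on the Python side (the service, module, and field names below are hypothetical, not taken from the repository's actual .proto files):

```python
# Hypothetical sketch of the Python side of the game <-> agent link.
# Assumes a generated gRPC stub from a .proto along the lines of:
#   service Game { rpc Step(Action) returns (Observation); }
# The modules (game_pb2, game_pb2_grpc) and fields are illustrative only.
import grpc
import game_pb2
import game_pb2_grpc

def play_one_step(action_id: int, port: int = 50051):
    # Each call sends one action to the C++ game process and receives
    # the next observation (state, reward, done flag) in response.
    with grpc.insecure_channel(f"localhost:{port}") as channel:
        stub = game_pb2_grpc.GameStub(channel)
        return stub.Step(game_pb2.Action(id=action_id))
```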

## Gameplay

Here you can see what the game looks like.

## Technologies/libraries used

- C++ 17
- SFML Library 2.5.1
- Python 3.10
- Pillow Library
- gRPC 1.46.3
- Protobuf 3.19.4.0

## How to run

1. Install the SFML library: `sudo apt-get install libsfml-dev`. If you use a server without a GPU for training, you can use the SFML-pi version, which doesn't require an X11 display.
2. Install gRPC for C++ by following THIS guide.
3. Provide the correct path to gRPC in the CMake file.
4. Compile the C++ project.
5. Run the servers and clients with the following scripts (fix the paths first):
   - Servers <- provide the WORKER_IDs and ports you want to use
   - Clients <- provide the ports you want to use
   - To easily kill the server processes, use `killServers.sh`.
6. To test the created models, use TestNeuralNetwork.

To configure the neural network or the hyperparameters, modify the DDQN file or pass parameters via learning_parameters.
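
For orientation, such a hyperparameter set for a DDQN agent might look like the sketch below; the key names and values are illustrative, since the actual parameters accepted by learning_parameters are defined in the repository:

```python
# Illustrative DDQN hyperparameters; the real keys accepted by
# learning_parameters are defined in the repository, not here.
learning_parameters = {
    "learning_rate": 1e-4,
    "gamma": 0.99,                    # discount factor
    "batch_size": 32,
    "replay_buffer_size": 100_000,
    "target_update_interval": 1_000,  # steps between target-network syncs
    "epsilon_start": 1.0,
    "epsilon_min": 0.05,
    "epsilon_decay": 0.999,           # multiplicative decay per step
}
```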

To estimate ReplayBuffer usage or compute the decay of the epsilon parameter, use the Calculations file.
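
As a back-of-the-envelope illustration of both calculations (the Calculations file remains the authoritative version; the frame size and buffer capacity below are assumed):

```python
import math

def steps_until_epsilon(eps_start: float, eps_min: float, decay: float) -> int:
    # Number of steps until eps_start * decay**n drops to eps_min,
    # assuming multiplicative decay applied once per step.
    return math.ceil(math.log(eps_min / eps_start) / math.log(decay))

def replay_buffer_bytes(capacity: int, state_bytes: int) -> int:
    # Rough memory footprint: two states (s, s') per transition; the
    # small scalars (action, reward, done) are ignored here.
    return capacity * 2 * state_bytes

print(steps_until_epsilon(1.0, 0.05, 0.999))      # ~2995 steps
print(replay_buffer_bytes(100_000, 84 * 84 * 4))  # ~5.6 GB for 84x84 float32 states
```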

To generate random maps, use the scripts in the Maps directory. Remember to adjust the MAP_SIZE variables in Game and GameDataHandling if needed, as well as the map limit in the Game file. If you change the map size, provide the correct path, and fix the loop parameters in the Level file to match the number of maps you want to use.
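
A map generator of this kind might look roughly like the sketch below; the tile encoding and file name are purely illustrative, so follow the actual scripts in Maps:

```python
import random

MAP_SIZE = 20  # must match the MAP_SIZE used in Game and GameDataHandling

def generate_map(wall_prob: float = 0.15) -> list[str]:
    # A square grid of '.' (floor) and '#' (wall) with a solid border;
    # the character encoding here is illustrative, not the project's format.
    rows = []
    for y in range(MAP_SIZE):
        row = []
        for x in range(MAP_SIZE):
            border = y in (0, MAP_SIZE - 1) or x in (0, MAP_SIZE - 1)
            row.append("#" if border or random.random() < wall_prob else ".")
        rows.append("".join(row))
    return rows

with open("map_0.txt", "w") as f:
    f.write("\n".join(generate_map()))
```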

All trained models are saved in the NeuralNetworks directory inside LearningData, which is created when a network is first saved. All logs collected during training are also saved in LearningData.

If you switch between the MLP and CNN models, update everything that is needed in the following files: DDQN, GameDataHandling, TestNeuralNetwork.
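
The practical difference between the two setups is the input the Q-network expects: a flat feature vector for the MLP versus stacked image frames for the CNN. A schematic sketch (the framework, layer sizes, and action count are assumptions, not the repository's actual architecture):

```python
import torch.nn as nn

# Schematic only: the framework, layer sizes, and action count are
# assumptions, not the repository's actual DDQN architecture.
N_ACTIONS = 4

mlp_q_net = nn.Sequential(           # expects a flat state vector
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, N_ACTIONS),
)

cnn_q_net = nn.Sequential(           # expects stacked image frames (C, H, W)
    nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(N_ACTIONS),        # infers the flattened size on first call
)
```

Whichever variant is active, the state preprocessing in GameDataHandling and the evaluation path in TestNeuralNetwork have to produce the matching input shape.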