
🤖 Reinforcement-Learning-OpenAI-Gym

Adaptive Reinforcement Learning for CartPole (discrete) and CarRacing (continuous) environments, designed to simulate real-world uncertainties with advanced noise strategies. 🏎️🧠


📜 Description

This project explores how RL agents adapt to noisy, dynamic environments by introducing a robust noise injection framework and evaluating advanced RL architectures. Key highlights include:

  • 🌪️ Simulating real-world uncertainties with friction, wind, and Gaussian noise.
  • 🎯 Comparative analysis of noise strategies:
    • Curriculum Learning
    • Annealing
    • Stochastic
    • Dynamic Randomization
  • ⚙️ Evaluation of advanced RL agents:
    • Dueling DQN
    • Double DQN
    • Noisy Dueling DQN
    • Distributional Dueling DQN

Experiments span the CartPole (discrete control) and CarRacing (continuous control) environments.
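The noise injection described above can be sketched as a lightweight environment wrapper. This is a hypothetical illustration: the repository's actual wrapper names, noise parameters, and API may differ.

```python
import numpy as np

class NoisyEnvWrapper:
    """Gym-style wrapper adding Gaussian sensor noise to observations
    and a constant 'wind' bias to actions (illustrative sketch only)."""

    def __init__(self, env, obs_sigma=0.05, wind=0.0, seed=0):
        self.env = env
        self.obs_sigma = obs_sigma      # std-dev of simulated sensor noise
        self.wind = wind                # constant action disturbance
        self.rng = np.random.default_rng(seed)

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        return self._noisy(obs), info

    def step(self, action):
        # Perturb the chosen action to mimic wind pushing the car/cart.
        action = np.asarray(action, dtype=float) + self.wind
        obs, reward, terminated, truncated, info = self.env.step(action)
        return self._noisy(obs), reward, terminated, truncated, info

    def _noisy(self, obs):
        # Zero-mean Gaussian noise models imperfect sensor readings.
        return obs + self.rng.normal(0.0, self.obs_sigma, size=np.shape(obs))
```

Wrapping the base environment this way keeps the agent code unchanged while the disturbance model is varied per experiment.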


CarRacing demo clip (trial_600)

CartPole training clip

✨ Key Features

  • πŸ› οΈ Noise Injection Framework:
    • Models uncertainties like friction, wind, and sensor errors.
  • πŸ”€ Dynamic Noise Strategies:
    • Gradual complexity increase (Curriculum Learning).
    • Randomized disturbances (Stochastic and Dynamic Randomization).
  • πŸ“Š Advanced RL Architectures:
    • Adaptation in high-noise scenarios using robust architectures.
  • 🌍 Transfer Learning:
    • Evaluate transferability from low-noise to high-noise environments.
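The noise strategies listed above can be sketched as simple per-episode schedules for the noise level σ. These helpers are hypothetical: the repository's actual schedule shapes, parameter names, and value ranges are assumptions.

```python
import random

def curriculum_sigma(episode, total_episodes, max_sigma=0.2):
    """Curriculum Learning: noise ramps up linearly, so the agent
    masters the clean task before facing full disturbances."""
    return max_sigma * min(1.0, episode / total_episodes)

def annealed_sigma(episode, total_episodes, max_sigma=0.2):
    """Annealing: start at full noise and decay linearly toward zero."""
    return max_sigma * max(0.0, 1.0 - episode / total_episodes)

def stochastic_sigma(max_sigma=0.2, rng=random):
    """Stochastic: draw an independent noise level every episode."""
    return rng.uniform(0.0, max_sigma)

def dynamic_randomized_params(rng=random):
    """Dynamic Randomization: resample several environment parameters
    (here friction and wind) each episode, not just the noise level."""
    return {"friction": rng.uniform(0.8, 1.2),
            "wind": rng.uniform(-0.1, 0.1)}
```

A training loop would call one of these at the start of each episode and pass the result to the noise-injection wrapper.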

🚀 Installation

  1. Clone the repository:
    git clone https://github.com/Eajunnn/Reinforcement-Learning-OpenAI-gym.git
  2. Navigate to the project directory:
    cd Reinforcement-Learning-OpenAI-gym
  3. Install the required dependencies:
    pip install -r requirements.txt

🔧 How to Run

  1. Select the environment:
    • CartPole: cartpole_main.py
    • CarRacing: carracing_main.py
  2. Adjust parameters in the scripts for noise types and strategies.
  3. Run the training:
    python <script_name>.py
    

📊 Results

  • Curriculum Learning emerged as the most effective noise strategy, achieving:
    • 🏆 Highest rewards.
    • ⏩ Fastest convergence.
    • 📈 Best stability in high-noise scenarios.
  • Dueling DQN demonstrated exceptional adaptability, outperforming the other architectures.
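Dueling DQN's robustness comes from splitting the Q-function into a state value V(s) and per-action advantages A(s, a), which lets the network learn how good a state is independently of noisy action outcomes. A sketch of the standard aggregation step (not the repository's code; the function name is illustrative) is:

```python
import numpy as np

def dueling_q_values(value, advantages):
    """Dueling DQN aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    Subtracting the mean advantage keeps the V/A decomposition identifiable,
    since otherwise a constant could shift freely between the two streams."""
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()

# Example: V(s) = 1.0 and advantages [0.0, 2.0] for two actions;
# the mean advantage is 1.0, so Q = [0.0, 2.0].
```

In a full agent, `value` and `advantages` would be the outputs of two separate network heads sharing a common feature extractor.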

🤝 Contributing

Contributions are welcome! Fork the repository, make your changes, and submit a pull request.


📜 License

This project is licensed under the MIT License.


⚠️ Disclaimer

This project is intended for educational and research purposes. Test thoroughly before deploying it in critical applications.
