Code samples for the Deep Reinforcement Learning Hands-On book
- Chapter 2: OpenAI Gym
- Chapter 3: Deep Learning with PyTorch
- Chapter 4: Cross-Entropy method
- Chapter 5: Tabular learning and the Bellman equation
- Chapter 6: Deep Q-Networks
- Chapter 7: DQN extensions
- Chapter 8: Stocks trading using RL
- Chapter 9: Policy Gradients: an alternative
- Chapter 10: Actor-Critic method
- Chapter 11: Asynchronous Advantage Actor-Critic
- Chapter 12: Chatbots training with RL
- Chapter 13: Web navigation
- Chapter 14: Continuous action space
- Chapter 15: Trust regions: TRPO, PPO and ACKTR
- Chapter 16: Black-box optimization in RL
- Chapter 17: Beyond model-free: imagination
- Chapter 18: AlphaGo Zero
This is the code repository for Deep Reinforcement Learning Hands-On, published by Packt. It contains all the supporting project files necessary to work through the book from start to finish.
Recent developments in reinforcement learning (RL), combined with deep learning (DL), have driven unprecedented progress in training agents to solve complex problems in a human-like way. Google DeepMind's use of deep neural networks to play and defeat well-known Atari arcade games propelled the field to prominence, and researchers are generating new ideas at a rapid pace.
Deep Reinforcement Learning Hands-On is a comprehensive guide to the latest DL tools and their limitations. You will evaluate methods including the Cross-entropy method and policy gradients before applying them to real-world environments, taking on both the Atari set of virtual games and family favorites such as Connect4. The book provides an introduction to the basics of RL, giving you the know-how to code intelligent learning agents that can take on a formidable array of practical tasks. Discover how to implement Q-learning on ‘grid world’ environments, teach your agent to buy and trade stocks, and find out how natural language models are driving the boom in chatbots.
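To give a flavor of the tabular methods the book starts with, here is a minimal, self-contained sketch of Q-learning on a toy 4x4 ‘grid world’. The environment, constants, and training loop below are illustrative assumptions for this README, not code from the book:

```python
import random
from collections import defaultdict

# Hypothetical 4x4 grid world: start at (0, 0), reward 1.0 for reaching (3, 3)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
GOAL = (3, 3)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

def step(state, move):
    # Apply a move, clipping to the grid; the episode ends at the goal cell
    row = min(max(state[0] + move[0], 0), 3)
    col = min(max(state[1] + move[1], 0), 3)
    new_state = (row, col)
    reward = 1.0 if new_state == GOAL else 0.0
    return new_state, reward, new_state == GOAL

q_table = defaultdict(float)  # (state, action_index) -> action value

for episode in range(500):
    state = (0, 0)
    for _ in range(100):  # cap episode length
        # Epsilon-greedy action selection
        if random.random() < EPSILON:
            action = random.randrange(len(ACTIONS))
        else:
            action = max(range(len(ACTIONS)), key=lambda i: q_table[(state, i)])
        new_state, reward, done = step(state, ACTIONS[action])
        # Bellman update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(q_table[(new_state, i)] for i in range(len(ACTIONS)))
        q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])
        state = new_state
        if done:
            break
```

Given enough exploration, following the highest-valued action in each cell after training should trace a shortest path from the start to the goal.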
All of the code is organized into folders, one per chapter, named after the chapter number: for example, Chapter02. The code will look like the following:
def get_actions(self):
    return [0, 1]
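The snippet above shows a method in isolation; below is a hedged sketch of how such a method might sit inside a toy environment class, along with a random agent using it. Everything apart from get_actions is an illustrative assumption for this README rather than a verbatim excerpt:

```python
import random

class Environment:
    """Toy environment: a fixed budget of steps, each yielding a random reward."""
    def __init__(self):
        self.steps_left = 10

    def get_actions(self):
        return [0, 1]

    def is_done(self):
        return self.steps_left == 0

    def action(self, action):
        # Consume one step and return a random reward for the chosen action
        if self.is_done():
            raise Exception("Game is over")
        self.steps_left -= 1
        return random.random()

# A random agent interacting with the environment until the step budget runs out
env = Environment()
total_reward = 0.0
while not env.is_done():
    total_reward += env.action(random.choice(env.get_actions()))
print("Total reward: %.2f" % total_reward)
```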