
Deep Reinforcement Learning Course with Tensorflow

Deep Reinforcement Learning Course is a free series of blog posts and videos 🆕 about Deep Reinforcement Learning, where we'll learn the main algorithms and how to implement them with TensorFlow.

📜 The articles explain each concept, from the big picture down to the mathematical details behind it.

📹 The videos explain how to create the agent with TensorFlow.

📜 Part 1: Introduction to Reinforcement Learning ARTICLE

Part 2: Q-learning with FrozenLake (a minimal Q-learning sketch follows this syllabus)

Part 3: Deep Q-learning with Doom

Part 4: Policy Gradients with Doom

Part 3+: Improvements in Deep Q-Learning

📜 [ARTICLE (📅 June)]

📹 [Create an Agent that learns to play Doom Deadly Corridor (📅 06/30)]

Part 5: Advantage Actor Critic (A2C)

📜 [ARTICLE (📅 June)]

📹 [Create an Agent that learns to play Outrun (📅 July)]

Part 6: Asynchronous Advantage Actor Critic (A3C)

📜 [ARTICLE (📅 July)]

📹 [Create an Agent that learns to play Michael Jackson's Moonwalker (📅 July)]

Part 7: Proximal Policy Optimization (PPO)

📜 [ARTICLE (📅 July)]

📹 [Create an Agent that learns to walk with MuJoCo (📅 July)]

Part 8: TBA
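
To make the syllabus more concrete, here is a minimal tabular Q-learning sketch in the spirit of Part 2. It assumes the classic Gym FrozenLake-v0 environment and uses illustrative hyperparameters; it is not taken from the course notebooks.

```python
import gym
import numpy as np

# Minimal tabular Q-learning sketch (in the spirit of Part 2).
# Environment name and hyperparameters are illustrative, not from the course notebooks.
env = gym.make("FrozenLake-v0")

# One Q-value per (state, action) pair
q_table = np.zeros((env.observation_space.n, env.action_space.n))

learning_rate = 0.8
gamma = 0.95      # discount factor
epsilon = 0.1     # exploration rate
episodes = 2000

for episode in range(episodes):
    state = env.reset()
    done = False
    while not done:
        # Epsilon-greedy action selection
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = np.argmax(q_table[state])

        new_state, reward, done, _ = env.step(action)

        # Q-learning update: Q(s,a) <- Q(s,a) + lr * (r + gamma * max_a' Q(s',a') - Q(s,a))
        q_table[state, action] += learning_rate * (
            reward + gamma * np.max(q_table[new_state]) - q_table[state, action]
        )
        state = new_state
```

Acting greedily with respect to q_table after training gives the learned policy; the deep variants from Part 3 onward replace this table with a neural network.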

Any questions? 👨‍💻

If you have any questions, feel free to ask me:

📧: [email protected]

Github: https://github.com/simoninithomas/Deep_reinforcement_learning_Course

🌐 : https://simoninithomas.github.io/Deep_reinforcement_learning_Course/

Twitter: @ThomasSimonini

Don't forget to follow me on Twitter, GitHub, and Medium to be alerted when I publish new articles.

How to help 🙌

3 ways:

  • Clap our articles and like our videos a lot: Clapping on Medium means that you really like our articles, and the more claps we have, the more our articles are shared. Liking our videos helps them become much more visible to the deep learning community.
  • Share and speak about our articles and videos: By sharing our articles and videos, you help us spread the word.
  • Improve our notebooks: if you find a bug or a better implementation, you can send a pull request.