Note: All images in this directory, unless specified otherwise, are licensed under CC BY-NC 4.0.
Figure number | Description |
---|---|
17-1 | The AWS DeepRacer one-eighteenth-scale autonomous car |
17-2 | The AWS login console |
17-3 | Workflow for training the AWS DeepRacer model |
17-4 | Creating a model on the AWS DeepRacer console |
17-5 | Track selection on the AWS DeepRacer console |
17-6 | Defining the action space on the AWS DeepRacer console (a sketch of such an action space follows this table) |
17-7 | Reward function parameters (a more in-depth review of these parameters is available in the AWS DeepRacer documentation) |
17-8 | Visual explanation of some of the reward function parameters |
17-9 | An example reward function (a runnable sketch in the same spirit follows this table) |
17-10 | Training graph and simulation video stream on the AWS DeepRacer console |
17-11 | Model evaluation page on the AWS DeepRacer console |
17-12 | Reinforcement learning theory basics in a nutshell |
17-13 | The DeepRacer training flow |
17-14 | Illustration of an agent exploring during an episode |
17-15 | Illustration of different paths to the goal |
17-16 | Training process for the vanilla policy gradient algorithm |
17-17 | Training using the PPO algorithm |
17-18 | Heatmap visualization for the example centerline reward function (a simplified plotting sketch follows this table) |
17-19 | Speed heatmaps of evaluation runs: (left) a lap with the basic example reward function; (right) a faster lap with the modified reward function |
17-20 | The test track layout |
17-21 | The Model upload menu on the AWS DeepRacer car web console |
17-22 | Driving mode selection menu on the AWS DeepRacer car web console |
17-23 | Model selection menu on the AWS DeepRacer car web console |
17-24 | Grad-CAM heatmaps for AWS DeepRacer navigation |
17-25 | Duckietown at the AI Driving Olympics |
17-26 | Robocar from Roborace designed by Daniel Simon (image courtesy of Roborace) |
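
Figure 17-6 shows the console step where the action space is defined. As a rough illustration of what that step produces, the sketch below enumerates a small discrete action space as (steering angle, speed) pairs; the specific angles, speeds, and dictionary layout are assumptions chosen for illustration, not the console's defaults or its exact output format.

```python
# A minimal sketch of a discrete action space like the one defined in
# Figure 17-6: each action pairs a steering angle with a speed.
# NOTE: the angle/speed values and the dict layout are illustrative
# assumptions, not the console's defaults or its on-disk format.
STEERING_ANGLES_DEG = [-30, -15, 0, 15, 30]  # steering angles, in degrees
SPEEDS_M_S = [0.5, 1.0]                      # speeds, in meters per second

action_space = [
    {"index": i, "steering_angle": angle, "speed": speed}
    for i, (angle, speed) in enumerate(
        (a, s) for a in STEERING_ANGLES_DEG for s in SPEEDS_M_S
    )
]

for action in action_space:
    print(action)
```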
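Figure 17-9 shows an example reward function, and Figures 17-7 and 17-8 describe the parameters it receives. The sketch below follows the shape of the center-line example in the AWS DeepRacer documentation: the function takes a `params` dictionary (`track_width` and `distance_from_center` are among its documented keys) and returns a float reward. Treat it as a starting point in the same spirit as the figure, not necessarily the exact function shown there.

```python
def reward_function(params):
    """Reward the agent for staying close to the center line.

    `params` is the dictionary AWS DeepRacer passes to the reward
    function; `track_width` and `distance_from_center` are two of its
    documented keys (see Figures 17-7 and 17-8).
    """
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Three bands around the center line, as fractions of track width.
    marker_1 = 0.1 * track_width
    marker_2 = 0.25 * track_width
    marker_3 = 0.5 * track_width

    if distance_from_center <= marker_1:
        reward = 1.0   # hugging the center line
    elif distance_from_center <= marker_2:
        reward = 0.5
    elif distance_from_center <= marker_3:
        reward = 0.1
    else:
        reward = 1e-3  # far off center, likely about to leave the track

    return float(reward)
```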
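Figures 17-18 and 17-19 visualize reward and speed as heatmaps over the track. As a much simpler stand-in, the sketch below sweeps the car's lateral offset from the center line and renders the resulting reward as a one-dimensional heat strip with matplotlib. It assumes the `reward_function` sketch above is in scope and uses a hypothetical track width.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical track width in meters, chosen only to show the banding.
TRACK_WIDTH = 0.9

# Sweep the lateral offset from the center line across half the track,
# evaluating the reward_function sketch defined above at each offset.
offsets = np.linspace(0.0, 0.5 * TRACK_WIDTH, 200)
rewards = [
    reward_function({"track_width": TRACK_WIDTH, "distance_from_center": d})
    for d in offsets
]

# Render the reward as a one-dimensional heat strip (a simplified cousin
# of the track-level heatmap in Figure 17-18).
plt.imshow([rewards], aspect="auto", cmap="viridis",
           extent=[0, 0.5 * TRACK_WIDTH, 0, 1])
plt.yticks([])
plt.xlabel("Distance from center line (m)")
plt.colorbar(label="Reward")
plt.title("Reward vs. lateral position (cf. Figure 17-18)")
plt.show()
```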