- Bipedal walker using Soft Actor-Critic
- A small bug fix for the SAC agent when loading a pretrained model
- DoubleDQN doesn't utilize DoubleQLearning (the intended Double DQN target is sketched after this list)
- Addition of q_networks
- Removing terminal state for Pendulum-v0
- Changes in the parameter descriptions for activation functions in the ANN module
- Changes in the Pendulum-v0 implementation
- Forward function update for softplus_function.hpp
- Addition of the Normal distribution to the ANN module
- Accessor and mutator for the action in q_learning
- Added cartpole-dqn notebook
- Addition of DuelingDQN and Noisy linear layer
- Addition of Noisy DQN to QLearning
- N-step learning for DQN (the n-step return is sketched after this list)
- Adding support for using gym_tcp_api to train agents in q_learning
- Documentation changes for q_learning
- Changes in cartpole_dqn corresponding to the mlpack repo
- Changes to the network initialization methods in Q_learning networks
- Adding a notebook for solving Acrobot using DQN
- Adding a notebook for solving MountainCar using DQN
- Added the pendulum notebook
- Addition of Categorical DQN
- Added the LunarLander-v2 environment with a DQN notebook
- Addition of Soft Actor-Critic to RL methods
- Adding an SAC example for the Pendulum environment
- Added support for multiple actions in the action space for Soft Actor-Critic
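
The DoubleDQN item above concerns how the TD target is computed: with Double Q-learning, the online network selects the greedy action and the target network evaluates it, which reduces the overestimation of the plain DQN target. The following is a minimal, self-contained sketch of the difference between the two targets; the helper names and toy values are illustrative assumptions, not mlpack's actual API.

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <iterator>
#include <vector>

// Q-values for every action in the next state, as produced by a network.
using QValues = std::vector<double>;

// Vanilla DQN target: the target network both selects and evaluates the
// greedy action, which tends to overestimate action values.
double DqnTarget(const QValues& targetQ, double reward, double discount)
{
  return reward + discount * *std::max_element(targetQ.begin(), targetQ.end());
}

// Double DQN target: the online network selects the greedy action and the
// target network evaluates it.
double DoubleDqnTarget(const QValues& onlineQ, const QValues& targetQ,
                       double reward, double discount)
{
  const std::size_t bestAction = std::distance(
      onlineQ.begin(), std::max_element(onlineQ.begin(), onlineQ.end()));
  return reward + discount * targetQ[bestAction];
}

int main()
{
  // Toy values: action 1 looks best to the online network, so the target
  // network's estimate for action 1 is used in the Double DQN target.
  const QValues onlineQ = {0.2, 0.9, 0.4};
  const QValues targetQ = {0.3, 0.5, 1.2};
  std::cout << "DQN target:        " << DqnTarget(targetQ, 1.0, 0.99) << "\n";
  std::cout << "Double DQN target: "
            << DoubleDqnTarget(onlineQ, targetQ, 1.0, 0.99) << "\n";
}
```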
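
The n-step learning item replaces the one-step TD target with an n-step return: discounted rewards over n transitions plus a bootstrapped value at the end. A minimal sketch of that return follows; the `Step` struct and the `bootstrapValue` argument are assumptions for illustration, not the project's actual replay interface.

```cpp
#include <iostream>
#include <vector>

// One stored step of experience (illustrative, not mlpack's buffer layout).
struct Step
{
  double reward;
  bool terminal;
};

// n-step return:
//   G = r_t + g*r_{t+1} + ... + g^{n-1}*r_{t+n-1} + g^n * bootstrapValue,
// where bootstrapValue is max_a Q(s_{t+n}, a) from the target network and the
// sum stops early (without bootstrapping) if a terminal state is reached.
double NStepReturn(const std::vector<Step>& steps, double gamma,
                   double bootstrapValue)
{
  double ret = 0.0;
  double discount = 1.0;
  for (const Step& step : steps)
  {
    ret += discount * step.reward;
    discount *= gamma;
    if (step.terminal)
      return ret;  // No bootstrapping past the end of the episode.
  }
  return ret + discount * bootstrapValue;
}

int main()
{
  // Three-step rollout followed by a bootstrapped value of 2.0.
  const std::vector<Step> rollout = {{1.0, false}, {0.5, false}, {0.0, false}};
  // 1 + 0.99*0.5 + 0.99^2*0 + 0.99^3*2
  std::cout << NStepReturn(rollout, 0.99, 2.0) << "\n";
}
```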