
Question about rl-agents #109

Open
HodaHo opened this issue Dec 27, 2023 · 1 comment
HodaHo commented Dec 27, 2023

Thanks for the great work and for sharing it.
As a beginner, I have read through the files and folders related to HighwayEnv and understood it fairly well from the documentation, but I still have some doubts about the rl-agents library. Could you explain what each of its folders is for?
In the rl_agents folder there are agents and trainer folders; what is each one for? The tests folder has the same structure. The scripts folder contains config and experiments; what are these for?
I want to train the intersection environment with DQN using this library (not with Stable-Baselines, for example) and then test the trained agent. Which parts should I use?
I apologize if these are beginner questions.

eleurent (Owner) commented Jan 3, 2024

> we have agents and trainer folders, what is each one for?

The agents folder defines the RL algorithms (an agent interacts with an environment and updates its internal model), while trainer contains the evaluation.py file, which simply interfaces an agent with an environment. (The name evaluation was probably a bad choice: here we evaluate the RL algorithm on a given environment by using it to train a policy.)
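To illustrate the split between the two folders, here is a minimal, self-contained sketch of that pattern. All class and function names below are illustrative assumptions, not the actual rl-agents API: the "agent" side picks actions and updates its internal state, while the "trainer" side just wires the agent to an environment.

```python
class TrivialAgent:
    """Stands in for an 'agents' class: chooses actions, updates a model."""
    def __init__(self, actions):
        self.actions = actions
        self.updates = 0  # how many transitions the agent has seen

    def act(self, state):
        # Pick an action deterministically from the current state.
        return self.actions[state % len(self.actions)]

    def record(self, state, action, reward, next_state):
        # A real agent (e.g. DQN) would update its value network here.
        self.updates += 1


class CountdownEnv:
    """Stands in for an environment: steps until a counter reaches zero."""
    def __init__(self, start=5):
        self.start = start

    def reset(self):
        self.state = self.start
        return self.state

    def step(self, action):
        self.state -= 1
        done = self.state <= 0
        return self.state, 1.0, done  # next_state, reward, done


def run_episode(agent, env):
    """Plays the role of trainer/evaluation.py: interfaces agent and env."""
    state, total_reward, done = env.reset(), 0.0, False
    while not done:
        action = agent.act(state)
        next_state, reward, done = env.step(action)
        agent.record(state, action, reward, next_state)
        total_reward += reward
        state = next_state
    return total_reward


agent = TrivialAgent(actions=[0, 1])
env = CountdownEnv(start=5)
print(run_episode(agent, env))  # 5 steps, reward 1.0 each -> 5.0
```

Note that the episode loop lives entirely outside the agent, which is exactly why the agent code and the trainer code can sit in separate folders.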

> We have the same in the test folder

These are just unit tests that check specific parts of the code; they are not meant to "test" an agent.
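For a sense of what "unit test" means here, a sketch of the kind of check such a test performs (the helper and test below are hypothetical examples, not actual rl-agents tests):

```python
def discounted_return(rewards, gamma):
    """Toy helper under test: computes sum over t of gamma**t * r_t."""
    return sum(r * gamma**t for t, r in enumerate(rewards))


def test_discounted_return():
    # Checks one small, deterministic piece of logic -- no agent training.
    assert discounted_return([1.0, 1.0], gamma=0.5) == 1.5


test_discounted_return()
print("ok")
```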

> The scripts folder contains config and experiments, what are these for?

experiments.py is a script that runs an Evaluation of a given agent on a given environment, both defined by a configuration. The config folder contains JSON configuration files specifying various experiments with different environments and agents.
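As a rough illustration, an agent configuration file might look something like the sketch below. The field names and values here are assumptions for illustration only; check the actual JSON files under the config folder for the real schema used by rl-agents.

```json
{
    "__class__": "<class 'rl_agents.agents.deep_q_network.pytorch.DQNAgent'>",
    "gamma": 0.95,
    "batch_size": 32,
    "model": {
        "type": "MultiLayerPerceptron",
        "layers": [256, 256]
    }
}
```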

> I want to train the intersection with DQN with this library (not with stableBaseline for example) and test the training, which items should I use?

So you should just run `python experiments.py evaluate env_config.json agent_config.json --train --episodes=N` to train a policy, and then `python experiments.py evaluate env_config.json agent_config.json --test --episodes=5` to test the trained policy with the latest checkpoint.

But I would still recommend checking out SB3, which is a much better and more actively maintained library than this humble one :)
