Own reward function and environment, how to use model? #120

Open
onkarrai06 opened this issue Jul 9, 2024 · 0 comments
Hi, I am a student working on a MARL project and would like to use your models for it.

However, I am finding it confusing how to train your model with my own reward function and my own environment. I am working with highway-env's highway-v0, and I see that you use a .json config file with the environment details, but what if I want to use my own environment with my own reward functions, not just configuration changes such as controlled_vehicles?
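For what it's worth, my current understanding is that a custom environment in this ecosystem just needs to follow the Gym-style reset()/step() interface and compute its own reward internally (in highway-env this is done by overriding a `_reward` method on the environment class, if I read the source correctly). Below is a minimal self-contained stand-in illustrating that pattern; the class and method names are hypothetical, not the real highway-env API:

```python
# Minimal stand-in for a Gym-style environment with a custom reward.
# Hypothetical names for illustration only -- in highway-env one would
# subclass an existing environment and override its _reward method instead.
class MyCustomEnv:
    def __init__(self, goal=10):
        self.goal = goal      # position the agent must reach
        self.position = 0

    def reset(self):
        self.position = 0
        return self.position  # initial observation

    def _reward(self, action):
        # Custom reward function: +1 for moving toward the goal, -1 otherwise.
        return 1.0 if action == 1 else -1.0

    def step(self, action):
        self.position += 1 if action == 1 else -1
        reward = self._reward(action)
        done = self.position >= self.goal
        return self.position, reward, done, {}
```

Is it correct that, as long as my environment exposes this interface (and is registered with Gym), your training code would accept it in place of highway-v0?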

Another question: when running your code, it launches a simulated environment that shows how the ego vehicles learn over time, and I am not sure how I would reproduce that with my own environment. I would appreciate it if you could take some time to guide me on this.
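I assume the visualization comes from calling the environment's render() method once per step inside the rollout loop (in highway-env this opens a pygame window, if I understand correctly). Here is the loop pattern as I picture it, with a tiny hypothetical stand-in environment whose render() just records a frame instead of drawing one:

```python
# Sketch of the per-step render loop. TinyEnv is a hypothetical stand-in;
# a real environment's render() would draw the scene (e.g. a pygame window).
class TinyEnv:
    def __init__(self):
        self.t = 0

    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        return self.t, 0.0, self.t >= 3, {}

    def render(self):
        # Stand-in: return a frame label instead of drawing.
        return f"frame {self.t}"

env = TinyEnv()
obs = env.reset()
frames = [env.render()]      # render the initial state
done = False
while not done:
    obs, reward, done, info = env.step(0)
    frames.append(env.render())  # render after every step
```

Is this roughly what your code does, i.e. would it be enough for my environment to implement render() for the simulation window to appear?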

Thank you.
