
Optimal Configuration #31

Open
HRezaei opened this issue Aug 31, 2024 · 0 comments

Comments


HRezaei commented Aug 31, 2024

Hi @ClementRomac,

This is a question rather than a real issue; if you enabled the Discussions section of the repo, it would fit better there.

I'm wondering what the optimal configuration is for running the experiments. I tried running train_language_agent.py with almost the same configuration as experiments/configs/multi-node_slurm_cluster_config.yaml and experiments/campaign/Mixed_training/GFlan-T5_large.slurm on 8x A100 80GB GPUs, but it was slow (roughly 2 frames per second). When I tweak the configuration to improve speed, for example by increasing the mini-batch size, I either hit a CUDA out-of-memory error or get errors about all-NaN tensors (vanishing gradients, perhaps?). I would appreciate any hints on which configuration, on which hardware, reproduces the paper's results, and at what speed (frames per second).
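For context, here is roughly how I have been probing the all-NaN tensors. This is a minimal, self-contained PyTorch sketch, not the repo's actual training loop: the `nn.Linear` stand-in, learning rate, and tensor shapes are placeholders for the real policy/value networks in train_language_agent.py.

```python
import torch
import torch.nn as nn

# Surface the first backward op that produces NaN/Inf (slow; debugging only)
torch.autograd.set_detect_anomaly(True)

# Tiny stand-in model so the sketch runs on its own; substitute the
# actual model and optimizer from the training script here.
model = nn.Linear(8, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.randn(4, 8)
loss = model(x).mean()
loss.backward()

# Flag any parameter whose gradient came back all-NaN before stepping
for name, param in model.named_parameters():
    if param.grad is not None and torch.isnan(param.grad).all():
        print(f"all-NaN gradient in {name}")

optimizer.step()
```

With anomaly detection enabled, the traceback points at the first backward operation that produced the NaN, but so far that has not told me whether the root cause is the mini-batch size itself or some other hyperparameter interaction.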
