hardware requirements #21
Comments
Hi @antgr, I used one of the Lambda machines https://lambdalabs.com/deep-learning/workstations/4-gpu to train the model. It's probably the GPU memory that causes the problem for you. I'll have a more refined answer later on.
Hi @titipata, is there any workaround that I could use to train with one GPU, even if the final model will be less capable? Specifically, I would like to jointly train your model with another argumentation mining task. Do you think your model could help me on that other task?
@antgr I actually train with one GPU. However, the GPU memory usage probably gets a bit high, ~6-7 GB (out of a maximum of 10 GB). I'd say the easiest workaround is to reduce the batch size or the size of the model. Definitely, I think this will help improve other tasks, especially if the argument mining task is in the science domain.
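
For reference, a minimal sketch of what "reduce the batch size" looks like, assuming a plain PyTorch training loop (the dataset, model, and sizes below are placeholders, not this repo's actual code):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

BATCH_SIZE = 8  # drop this (e.g. 32 -> 8 -> 4) until the model fits in GPU memory

# Dummy tensors standing in for the real dataset; shapes are illustrative only.
X = torch.randn(1024, 128)
y = torch.randint(0, 2, (1024,))
loader = DataLoader(TensorDataset(X, y), batch_size=BATCH_SIZE, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for xb, yb in loader:
    xb, yb = xb.to(device), yb.to(device)
    optimizer.zero_grad()
    loss = loss_fn(model(xb), yb)
    loss.backward()
    optimizer.step()
```

Smaller batches trade training speed for lower peak memory; the per-step activations shrink roughly in proportion to the batch size, while the model weights stay the same size.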
Hi, I ran the experiment on my machine and also in Colab (https://colab.research.google.com/drive/10z-ZpmTRBIegicA4p9ueA_BOLet-7fHJ),
but my machine halts (1810932it [37:24, 2382.33it/s]) and so does Colab (1748626it [21:51, 1.29s/it]).
So, what are the hardware requirements to run it smoothly?
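
One quick way to check whether GPU memory is the bottleneck, assuming PyTorch is the backend (a sketch, not part of this repo):

```python
import torch

if torch.cuda.is_available():
    device = torch.cuda.current_device()
    total = torch.cuda.get_device_properties(device).total_memory / 1e9
    allocated = torch.cuda.memory_allocated(device) / 1e9
    reserved = torch.cuda.memory_reserved(device) / 1e9
    print(f"GPU: {torch.cuda.get_device_name(device)}")
    print(f"total {total:.1f} GB, allocated {allocated:.1f} GB, reserved {reserved:.1f} GB")
else:
    print("No CUDA device visible; training will fall back to CPU and be very slow.")
```

Running this (or watching `nvidia-smi`) during training shows whether you are close to the card's limit before it stalls.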