Hi, I am trying to train your model on an AWS p2.xlarge instance (with a 12 GB K80 GPU) on the CoNLL-2012 corpus (2,802 documents in the training set). Training eats all 64 GB of RAM before getting through 30% of the first epoch and is killed before finishing it.
I was wondering what type of machine you trained it on?
Is 64 GB of RAM too small for training on the CoNLL corpus?
Huh, that's strange. I trained the model on a machine with 128 GB of RAM, but I don't think it should use more than 10 GB. Do you know what's taking up all the memory? What is the size of the data/features directory created during preprocessing?
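For reference, a quick way to check that from the training machine (assuming the preprocessed features live under `data/features` relative to the repo root; adjust the path if yours differs) is something like:

```python
import os

def dir_size_gb(path):
    """Sum the sizes of all regular files under `path`, in GB."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fpath = os.path.join(root, name)
            if os.path.isfile(fpath):
                total += os.path.getsize(fpath)
    return total / 1e9

# "data/features" is an assumed location; point this at wherever
# the preprocessing step actually wrote its output.
print("data/features: %.2f GB" % dir_size_gb("data/features"))
```

If that directory is unexpectedly large, the preprocessing output (rather than the model itself) may be what's blowing up memory when it gets loaded.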