
Memory requirement for training on the conll-2012 corpus #7

Open

thomwolf opened this issue May 17, 2017 · 2 comments

@thomwolf

Hi, I am trying to train your model on an AWS p2.xlarge instance (with a 12 GB K80 GPU) on the CoNLL-2012 corpus (2,802 documents in the training set). Training eats all 64 GB of RAM in under 30% of the first epoch and gets killed before finishing it.

I was wondering what type of machine you trained it on.
Is 64 GB of RAM too small for training on the CoNLL corpus?
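One way to see where the memory goes is to log the training process's resident set size over time. A minimal sketch, assuming `psutil` is installed; the per-batch hook shown in the comment is hypothetical, not part of this repo's training script:

```python
import os
import psutil  # pip install psutil

def log_memory(tag=""):
    """Print the resident set size (RSS) of the current process in GB."""
    rss = psutil.Process(os.getpid()).memory_info().rss
    print("%s RSS: %.2f GB" % (tag, rss / 1024.0 ** 3))

# e.g. call once per batch inside the training loop (hypothetical hook):
# log_memory("after batch %d" % batch_idx)
```

If the RSS grows steadily across batches rather than plateauing after the data is loaded, that points to a leak or to features being accumulated in memory rather than streamed.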

@julien-c

+1

@clarkkev
Owner

Huh, that's strange. I trained the model on a 128 GB machine, but I don't think it should use more than 10 GB of RAM. Do you know what's taking up all the memory? What is the size of the data/features directory created during preprocessing?
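Checking that is a one-liner with `du -sh data/features`, or in Python, a minimal sketch (the `data/features` path is the preprocessing output directory mentioned above):

```python
import os

def dir_size_gb(path):
    """Total size of all files under `path`, in GB."""
    total = 0
    for root, _, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / 1024.0 ** 3

print("data/features: %.2f GB" % dir_size_gb("data/features"))
```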
