Hi Oren,
I am training on my 3 GB corpus. I am doing it on a cluster with a 27 GB memory limit, and I encounter:
cupy.cuda.memory.OutOfMemoryError.
Is it possible to limit the memory that the code uses? Or to split the corpus file and do the training in steps? Or to change some arguments to use less memory?
Thanks.
context2vec should run fine on a 12 GB GPU (I trained it on a K80). When you say 27 GB, you may mean the total memory of multiple GPUs (?), so maybe that's the problem. In any case, splitting the corpus file will not help. Probably the best thing to do is to lower --batchsize below its default value of 100.
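For concreteness, a lower-memory run might look like the sketch below. Only `--batchsize` (default 100) is confirmed in this thread; the script path, input flag, and GPU flag are assumptions about the repo's training script:

```sh
# Sketch only: train/train_context2vec.py, -i, and -g are assumed names;
# CORPUS_FILE is a placeholder. Keep halving --batchsize (e.g. 100 -> 50 -> 25)
# until the cupy.cuda.memory.OutOfMemoryError no longer occurs.
python train/train_context2vec.py -i CORPUS_FILE --batchsize 50 -g 0
```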