Spacy high memory consumption issue #13194
5 comments · 3 replies
-
Here's a thread that covers the memory usage related to the vocab: #10015. If you're only using a
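As an illustration of the vocab-related growth that thread discusses, here is a minimal sketch (assuming `en_core_web_md` is installed) showing that the pipeline's `StringStore` accumulates entries as new text is processed, which can look like a slow leak across many requests:

```python
import spacy

# Load the medium English pipeline (assumed to be installed).
nlp = spacy.load("en_core_web_md")
print(len(nlp.vocab.strings))   # baseline number of stored strings

# Processing text with previously unseen tokens adds entries to the StringStore.
nlp("Some text with previously unseen tokens, e.g. Kubernetes pod names")
print(len(nlp.vocab.strings))   # the count is larger after processing
```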
-
Hi @adrianeboyd, I have not installed the torch module to use spacy. I am loading spacy with `exclude=["parser", "textcat", "tagger", "tokenizer"]`, and from the logs I can see the English model taking around 550-600MB of pod memory.
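For reference, a minimal sketch (component names assumed from the stock English pipelines) of loading with `exclude` and checking what actually remains in the pipeline:

```python
import spacy

# Excluded components are not loaded at all, which saves some memory.
nlp = spacy.load(
    "en_core_web_md",
    exclude=["tagger", "parser", "lemmatizer"],
)
print(nlp.pipe_names)  # inspect which components are still loaded
```

Note that the model's vocab and word vectors are loaded regardless of which pipeline components you exclude, so there is a fixed floor for the pipeline's memory footprint.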
-
I just wanted to use 'ner' from en_core_web_md.
-
Thanks @adrianeboyd, yes, I have excluded the other components, and I found that passing split sentences rather than one long text reduces memory consumption, but there is still a memory leak: pod memory increases slowly, and as the number of requests grows it may exceed the limit.

```python
detected_sentences = []
nlp = spacy.load("en_core_web_md", exclude=['tok2vec', 'tagger', 'parser', 'senter', 'attribute_ruler', 'lemmatizer'])
docs = list(nlp.pipe(detected_sentences))
```
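Building on that snippet, here is a hedged sketch of the same batching idea that streams sentences through `nlp.pipe` and keeps only plain entity tuples, so full `Doc` objects are not held alive between requests (the exclude list is copied from the snippet above):

```python
import spacy

nlp = spacy.load(
    "en_core_web_md",
    exclude=["tok2vec", "tagger", "parser", "senter", "attribute_ruler", "lemmatizer"],
)

def extract_entities(sentences, batch_size=64):
    """Stream sentences through the pipeline and keep only (text, label) pairs."""
    results = []
    for doc in nlp.pipe(sentences, batch_size=batch_size):
        # Keep plain tuples rather than Doc objects so the docs can be garbage-collected.
        results.extend((ent.text, ent.label_) for ent in doc.ents)
    return results
```

Keeping `docs = list(nlp.pipe(...))` holds every `Doc` in memory at once, so extracting just the data you need per batch is usually the first thing to try.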
-
I am using the code below to load spacy, and it becomes memory intensive as the text size increases. Is there a way to reduce memory usage, or to limit the length of text for a given RAM budget?
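One way to bound per-request memory, sketched here under the assumption that only NER output is needed: cap `nlp.max_length` and process long inputs in fixed-size chunks rather than as one huge `Doc` (note that naive character chunking can split an entity at a boundary):

```python
import spacy

nlp = spacy.load("en_core_web_md", exclude=["parser", "lemmatizer"])
nlp.max_length = 200_000  # hard limit on the length of a single text

def chunked(text, size=50_000):
    """Yield fixed-size character chunks of a long text."""
    for start in range(0, len(text), size):
        yield text[start:start + size]

def entities_for(text):
    ents = []
    for doc in nlp.pipe(chunked(text)):
        ents.extend((ent.text, ent.label_) for ent in doc.ents)
    return ents
```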
-
Hello,
I am running a spacy model with English medium weights inside a Kubernetes pod.
After loading the spacy model it takes around 500MB, and memory keeps increasing with every prediction.
Even after deleting the spacy object, the memory is not released.
I have allotted around 1GB of memory to my pod, but after a few hours it consumes all of it and the pod gets stuck.
Could you please suggest how to release memory and ensure it does not grow with the number of predictions?
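A common workaround for this pattern, sketched here as a service-level idea rather than an official spaCy API: recreate the `nlp` object after a fixed number of predictions so whatever has accumulated in the vocab is released, instead of relying on `del` alone (the threshold and exclude list below are assumptions to tune for your workload):

```python
import gc
import spacy

RELOAD_EVERY = 10_000  # hypothetical threshold; tune for your workload

class NerService:
    def __init__(self):
        self._load()

    def _load(self):
        self.nlp = spacy.load(
            "en_core_web_md",
            exclude=["tagger", "parser", "lemmatizer"],
        )
        self.count = 0

    def predict(self, text):
        # Periodically drop and reload the pipeline to release accumulated memory.
        if self.count >= RELOAD_EVERY:
            del self.nlp
            gc.collect()
            self._load()
        self.count += 1
        doc = self.nlp(text)
        return [(ent.text, ent.label_) for ent in doc.ents]
```

Also keep in mind that CPython does not always return freed memory to the operating system right away, so pod RSS may not drop immediately even after objects are released.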