This repository has been archived by the owner on Nov 3, 2023. It is now read-only.
-
I encountered the same problem on Google Colab with a Tesla T4 (16 GB VRAM, of which less than 11 GB was used); RAM usage was about 3 GB.
-
The model is optimized for GPU usage; responses in CPU-only mode can be quite slow (these are big models, after all).
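To quantify how slow a given setup actually is, a minimal timing helper can be useful. This is a framework-agnostic sketch; the `respond` callable is a hypothetical stand-in for whatever wraps the ParlAI agent in your code:

```python
import time

def time_response(respond, message):
    """Return (reply, seconds) for one call to a response function.

    `respond` is any callable mapping a message string to a reply string,
    e.g. a thin wrapper around a ParlAI agent (hypothetical here).
    """
    start = time.perf_counter()
    reply = respond(message)
    elapsed = time.perf_counter() - start
    return reply, elapsed

# Demo with a trivial stand-in for a model:
reply, secs = time_response(lambda m: m.upper(), "hello")
print(reply)
```

Comparing the measured latency on CPU vs. GPU for the same prompt makes it clear whether the slowdown comes from hardware or from something else (e.g. the search server).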
-
A common confounder here is accidentally installing a CPU-only PyTorch build.
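A quick way to check for this is the snippet below. It is a diagnostic sketch: `torch.version.cuda` is `None` on CPU-only wheels, while a CUDA wheel that still reports no available GPU usually points to a driver or visibility problem instead:

```python
def gpu_diagnostic():
    """Return a short string describing whether PyTorch can use a GPU."""
    try:
        import torch
    except ImportError:
        return "pytorch not installed"
    if torch.cuda.is_available():
        return "gpu available: " + torch.cuda.get_device_name(0)
    if torch.version.cuda is None:
        # CPU-only wheel: no CUDA support was compiled in at all.
        return "cpu-only pytorch build"
    # CUDA build, but no device visible (driver issue, CUDA_VISIBLE_DEVICES, etc.)
    return "cuda build, but no gpu visible to this process"

print(gpu_diagnostic())
```

If it reports a CPU-only build, reinstalling a CUDA-enabled wheel (using the install selector at pytorch.org for your CUDA version) should restore GPU inference.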
-
Hello, I am testing BlenderBot2 and the responses are quite slow.
I tried this: parlai interactive -mf zoo:blenderbot2/blenderbot2_400M/model --search-server relevant_search_server
It took over 2 minutes to get an answer to my "hello":
Enter Your Message: hello
[BlenderBot2Fid]: Hello, how are you today? I hope you have a great day! BINGO
My system information is the following: