Is there a minimum VRAM requirement for local (on-premise) inference?
I have two RTX A4000 GPUs (16GB VRAM each).
Is it possible to run inference locally using this setup?
Note: this is for multimodal inference with the Vision 11B model.
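
For rough sizing: an 11B-parameter model in bf16 needs about 22 GB just for the weights, which does not fit on a single 16 GB A4000 but should fit when sharded across both cards (32 GB total), with some headroom left for activations and the KV cache. Below is a minimal sketch of how that could look, assuming the model is loaded through Hugging Face transformers; the checkpoint name and image path are illustrative placeholders, not a confirmed setup.

```python
# Minimal multi-GPU inference sketch (assumes a transformers-compatible
# 11B vision checkpoint; substitute the actual model ID you are using).
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"  # example checkpoint

model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~2 bytes/param -> ~22 GB of weights
    device_map="auto",           # shard layers across both 16 GB A4000s
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder image
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```

With `device_map="auto"`, accelerate places layers on both GPUs automatically, so no manual tensor-parallel setup is needed; longer prompts or larger image batches will increase activation/KV-cache memory, so keep generation settings modest on 2x16 GB.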