Edit: sorry, I guess this should be under Issues. I'm new to GitHub and don't see where to delete this, so I'm opening an Issue instead...
Hi, firstly, great work: this runs acceptably on my non-GPU laptop, whereas the standard release wasn't usable at all :-)

I got fairly poor results trying to translate Japanese-language audio with the base model, so I tried the gpt-2-345M model, but I got the following error even with 8 GB+ of RAM currently available. Is there any way to configure this to work, or does the larger model simply require a vast amount of RAM?
The "mem required" to load the model looks fine, but "ggml ctx size" = 17592179909347.84 MB?!
```
whisper_model_load: loading model from 'ggml-model-gpt-2-345M.bin'
whisper_model_load: n_vocab = 50257
whisper_model_load: n_audio_ctx = 1024
whisper_model_load: n_audio_state = 1024
whisper_model_load: n_audio_head = 16
whisper_model_load: n_audio_layer = 24
whisper_model_load: n_text_ctx = 1
whisper_model_load: n_text_state = 50257
whisper_model_load: n_text_head = 1
whisper_model_load: n_text_layer = 289
whisper_model_load: n_mels = 74240
whisper_model_load: f16 = 19070976
whisper_model_load: type = 4
whisper_model_load: mem_required = 2608.00 MB
whisper_model_load: adding 50257 extra tokens
whisper_model_load: ggml ctx size = 17592179909347.84 MB
ggml_new_tensor_impl: not enough space in the context's memory pool
```
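For what it's worth, a quick back-of-envelope check (plain Python, nothing whisper-specific) shows that the reported ctx size converted to bytes is almost exactly 2^64, so it looks like an unsigned 64-bit size calculation wrapped around rather than a real memory estimate:

```python
# Convert the reported "ggml ctx size" from MB to bytes and compare to 2**64.
reported_mb = 17592179909347.84
reported_bytes = reported_mb * 1024 * 1024

print(f"reported: {reported_bytes:.4e} bytes")
print(f"2**64   : {float(2**64):.4e} bytes")
print(f"ratio   : {reported_bytes / 2**64:.7f}")  # within ~4e-7 of 1.0
```

That, together with the obviously bogus hyperparameters in the log above (n_mels = 74240, n_text_layer = 289), makes me suspect the whisper loader is reading the GPT-2 file's header with the wrong field layout, rather than the model genuinely needing that much RAM.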