It is not clear which backend is optimal for which hardware. I have an Intel Core Ultra 7 155H on Windows. Is it better to use the OpenVINO CPU backend or the oneMKL backend, or would the OpenVINO GPU backend be faster? Or should I try to make SYCL work, even though there is no official support for it on Windows and it does not appear to actually use the GPU? Would a quantized model with a custom audio context size be faster than the GPU on this processor? Also, is it faster to use base.en with the default audio context or small.en with a reduced context?
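For reference, the options being compared can be exercised roughly like this. This is a sketch only: binary names, model paths, and flags follow common whisper.cpp conventions (`quantize`, `whisper-cli`, `--audio-ctx`, `--ov-e-device`), but exact names can differ between versions and build configurations.

```shell
# Quantize base.en to Q5_0 (smaller model, often faster on CPU).
./build/bin/quantize models/ggml-base.en.bin models/ggml-base.en-q5_0.bin q5_0

# Run with a reduced audio context (default 1500; lower values trade
# accuracy for speed, which is what the base.en-vs-small.en question hinges on).
./build/bin/whisper-cli -m models/ggml-base.en-q5_0.bin -f samples/jfk.wav --audio-ctx 768

# With an OpenVINO-enabled build, the encoder device can be selected,
# which lets you compare OpenVINO CPU vs GPU directly.
./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/jfk.wav --ov-e-device GPU
```

In practice, timing each combination on the actual hardware (e.g. with the project's bench tool or by timing runs like the above) is the only reliable way to rank the backends for a given chip.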