What does this PR address?
This is a first draft for integrating OpenVINO (if so desired).
Background: I was working on an audio streaming / transcription application on the Raspberry Pi 4 and wanted to get as much performance out of it as possible, and OpenVINO turned out to be the way to do that.
I am new to Bazel and spent a lot of time figuring out how to get the basics working, so please excuse the crudity of the change. I have not yet figured out how to download resources or toggle features like OpenVINO on and off in Bazel, so any help there would be appreciated.
Since OpenVINO support is newer than the last supported whisper.cpp version, I bumped the version to a point in time that works well for both the normal and the OpenVINO case. This surfaced two deprecations: the `N_MEL` constant was removed, and two calls that previously took no context now need one.
Maybe of interest but slightly unrelated: OpenVINO uses a static graph, so if you are using smaller audio context sizes to increase inference speed, you must create an OpenVINO graph with that audio context. You must patch (or write your own version of) the OpenAI `whisper.load_model` function to include something like this:
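A rough sketch of the idea (the wrapper name and the `AUDIO_CTX` value below are placeholders, not the exact code from my setup): truncate the encoder's positional embedding so the exported static graph matches the smaller audio context.

```python
import whisper

AUDIO_CTX = 512  # placeholder: the smaller audio_ctx you also pass to whisper.cpp


def load_model_with_audio_ctx(name: str, audio_ctx: int = AUDIO_CTX):
    """Placeholder wrapper around whisper.load_model that shrinks the
    encoder's audio context before converting the encoder to OpenVINO."""
    model = whisper.load_model(name)

    # The encoder's positional embedding is a buffer of shape
    # (n_audio_ctx, n_audio_state); truncating it fixes the encoder
    # to the smaller context so the exported static graph matches.
    pe = model.encoder.positional_embedding
    model.encoder.positional_embedding = pe[:audio_ctx].clone()

    # Keep the model dimensions consistent with the truncated embedding.
    model.dims.n_audio_ctx = audio_ctx
    return model
```

The OpenVINO encoder graph then has to be generated from this patched model, and the same smaller audio context passed to whisper.cpp at inference time.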