Streaming support #16

Open
matbee-eth opened this issue Aug 15, 2023 · 3 comments

Comments

@matbee-eth

Have you thought about / planned a way to support streaming audio instead of sending the entire audio clip? If it's not currently supported, how would you solve it? I would appreciate some guidance so I can send a proper PR to support streaming, if possible.

@xenova
Owner

xenova commented Aug 15, 2023

At the moment (w/ WASM backend), the latency for the encoder is just too much to do real-time streaming. Fortunately, the onnxruntime-web team have been busy improving their WebGPU backend, and it's at a stage where we can do testing with it now.

So, we hope to add support for it soon! If you're up for the challenge, you can fork transformers.js, build onnxruntime-web from source w/ webgpu support, and replace the import with the custom onnxruntime-web build.

@matbee-eth
Author

matbee-eth commented Aug 15, 2023

By real-time I actually just mean streaming an audio source (mic-in, generic audio out, file data, etc.) into Whisper, making it as close to real-time as the tech allows. So basically: chunk the audio into ~5-30 second segments and simply queue up the chunks to transcribe, something like the sketch below.

I'll look into the WebGPU build; I was already planning to check whether this project works with WebGPU, so I'll take a look.
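
For reference, here is a minimal sketch of the chunk-and-queue idea, assuming the `@xenova/transformers` automatic-speech-recognition pipeline and 16 kHz mono `Float32Array` input; the model name, chunk length, and the `pushSamples`/`drain` helpers are illustrative placeholders, not an existing API in this repo:

```js
import { pipeline } from '@xenova/transformers';

const SAMPLE_RATE = 16000;                 // Whisper expects 16 kHz mono input
const CHUNK_SECONDS = 30;                  // ~5-30 s per chunk, as discussed above
const CHUNK_SAMPLES = CHUNK_SECONDS * SAMPLE_RATE;

const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en');

let buffered = new Float32Array(0);        // samples waiting to form a full chunk
const queue = [];                          // full chunks waiting to be transcribed
let busy = false;

// Call this as audio arrives (mic-in, decoded file data, etc.).
function pushSamples(samples) {
  const merged = new Float32Array(buffered.length + samples.length);
  merged.set(buffered);
  merged.set(samples, buffered.length);
  buffered = merged;

  // Slice off complete chunks and queue them for transcription.
  while (buffered.length >= CHUNK_SAMPLES) {
    queue.push(buffered.slice(0, CHUNK_SAMPLES));
    buffered = buffered.slice(CHUNK_SAMPLES);
  }
  drain();
}

// Process queued chunks one at a time, appending to a running transcript.
async function drain() {
  if (busy) return;
  busy = true;
  while (queue.length > 0) {
    const { text } = await transcriber(queue.shift());
    console.log(text);                     // merge with the current predicted text here
  }
  busy = false;
}
```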

@xenova
Owner

xenova commented Aug 15, 2023

Well yes, you can just send 30-second chunks of audio to Whisper, but as stated above, you won't get a response for at least 1-2 seconds due to the encoder latency. Then on top of that, depending on how much you are decoding, you'd have to wait for the full chunk to be decoded before merging with the current predicted text.

That said, I do think this will be feasible with WebGPU, at which point I'll probably take a look at this (unless you'd be interested in starting now, working with the WASM backend for the time being).
