
[Feature Request]: Switch whisper to onnx runtime #164

Closed
thewh1teagle opened this issue Jul 8, 2024 · 1 comment
thewh1teagle (Owner):

Describe the feature

Currently sherpa-onnx supports CoreML and CUDA.
Once it supports ROCm for AMD GPUs, we'll switch to it via the sherpa-rs bindings.

Tracking issue: k2-fsa/sherpa-onnx#196

thewh1teagle self-assigned this Jul 8, 2024
thewh1teagle (Owner, Author) commented Jul 9, 2024:

Tasks before the switch:

- [ ] Wait for ggerganov/whisper.cpp#2279 to be merged (maybe we'll keep whisper.cpp for STT)
- [ ] The ONNX Whisper model format is too complicated; find a way to pass a single file so users can change it
- [ ] Fix #167
- [ ] Add a migration UI that deletes the old GGML file and downloads the new model
- [ ] Check that speed is comparable to whisper.cpp
- [ ] Prepare faster-whisper-v2-d3-e3/tree/main in ONNX format
- [x] Implement max speakers for diarization
- [x] Fix k2-fsa/sherpa-onnx#1085
- [ ] Add AMD support to sherpa (k2-fsa/sherpa-onnx#196)
- [ ] Check that it returns Hebrew correctly
- [ ] Check that it works on macOS, Linux, and Windows
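One task above is that the ONNX Whisper format spans multiple files, while users should be able to swap models by replacing a single file. A minimal sketch of one possible approach (purely illustrative, not the project's actual solution): pack the parts into a single ZIP archive and unpack it at load time. The file names `encoder.onnx`, `decoder.onnx`, and `tokens.txt` are assumptions; the real sherpa-onnx layout may differ.

```python
import zipfile
from pathlib import Path

# Hypothetical file layout for an ONNX Whisper model;
# the actual sherpa-onnx layout may differ.
MODEL_PARTS = ["encoder.onnx", "decoder.onnx", "tokens.txt"]

def bundle_model(model_dir: str, out_path: str) -> None:
    """Pack the separate model files into one archive a user can swap."""
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in MODEL_PARTS:
            zf.write(Path(model_dir) / name, arcname=name)

def unbundle_model(archive_path: str, dest_dir: str) -> list[str]:
    """Extract the archive back into the multi-file layout the runtime expects."""
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(dest_dir)
        return zf.namelist()
```

This keeps the user-facing artifact to one file while leaving the on-disk layout the runtime reads unchanged.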
