Hello,
I am interested in using pywhispercpp for speech recognition and speaker diarization.
I have installed the library and followed the instructions in the README file, but I am not sure how to use it for my use case.
Could you please provide some guidance or examples on how to perform transcription and speaker diarization with pywhispercpp?
Note: I'm using Google Colab.
Thank you.
Thank you for your interest in the project.
The examples folder already contains a number of examples; you can go through them to see how it works. I think they will run on Colab as well.
The documentation could also be helpful for understanding the API.
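For the transcription part, something along these lines should work on Colab too. This is just a minimal sketch following the README-style API; the model size (`base.en`) and the file name `audio.wav` are placeholders, so substitute your own audio file:

```python
from pywhispercpp.model import Model

# Load a Whisper model (downloaded automatically on first use).
model = Model('base.en')

# Transcribe an audio file; returns a list of segments.
segments = model.transcribe('audio.wav')

# Print the recognized text segment by segment.
for segment in segments:
    print(segment.text)
```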
Please note that speaker diarization is not supported by Whisper; the model is trained for transcription only.
I hope this will be helpful.