
Integrating pywhispercpp as the first extension to lollms-webui #15

Open
ParisNeo opened this issue Jul 2, 2023 · 2 comments

ParisNeo commented Jul 2, 2023

Hi Abdeladim. I have finally started writing extensions for lollms, and I was thinking the first extension should be audio in and audio out. But I need to comply with my rule number 1: everything should be done locally. No data is sent anywhere out of your PC.

To do this, I think whisper is really cool. Even cooler is whisper.cpp, but since I use Python, I need pywhispercpp :)
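
To give an idea of the flow I have in mind, here is a rough sketch of local transcription with pywhispercpp (based on the `Model` API shown in its README; the model name and audio path are just placeholders):

```python
from pywhispercpp.model import Model

# Load a local Whisper model; everything runs on the user's machine,
# no data leaves the PC
model = Model('base.en', n_threads=6)

# Transcribe a local audio file and print the recognized text
segments = model.transcribe('recording.wav')
for segment in segments:
    print(segment.text)
```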

Do you have an example of your code that uses a direct input stream from the microphone? That would simplify the integration greatly.

@abdeladim-s (Owner)

Hi Saifeddine,

That's great to hear. We'll finally see lollms with audio :)

Yes, I have an example that uses a direct input stream from the mic: the assistant example.
I built it that way because I had the idea of deploying it on a Raspberry Pi as an assistant.

The Assistant class is ready to be imported; you only need to pass a callback to it. Otherwise, you can tweak the source code to serve your needs. It's fairly simple.
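
Roughly, using it looks like this (a minimal sketch: it assumes the `Assistant` class from `pywhispercpp.examples.assistant` with its `commands_callback` parameter and `start()` method; the callback below is just an illustration of where lollms-webui would hook in):

```python
from pywhispercpp.examples.assistant import Assistant

def on_speech(text):
    # Illustrative callback: lollms-webui would forward the transcribed
    # text to the model instead of printing it
    print(f"Heard: {text}")

# Captures audio from the microphone, detects speech segments, and calls
# the callback with each transcription; everything runs locally
my_assistant = Assistant(commands_callback=on_speech, n_threads=8)
my_assistant.start()
```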

Let me know if you need anything else.


ParisNeo commented Jul 3, 2023

Wonderful!
