LLM support #132
Some time ago I found a trick on GitHub that made it possible to "understand" sentences for voice assistants.
I find this suggestion pretty interesting. In Dicio everything is set up to support a fallback skill for when a user intent could not be interpreted by any skill; currently the fallback skill just says "Sorry, I could not understand". If there is an API we can integrate into Dicio without asking users to obtain developer keys in order to access ChatGPT, that would be great. I don't know what ChatGPT's policy is with regard to third-party clients.
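The fallback mechanism described above can be sketched as a simple skill chain: try each skill in turn, and hand unmatched input to a catch-all fallback, which is exactly where an LLM could be slotted in. All names below are hypothetical; Dicio's actual skill API is in Kotlin and differs from this sketch.

```python
# Minimal sketch of a skill chain with a fallback slot.
# Class and method names are hypothetical, not Dicio's real API.

class Skill:
    def can_handle(self, utterance: str) -> bool:
        raise NotImplementedError

    def respond(self, utterance: str) -> str:
        raise NotImplementedError


class WeatherSkill(Skill):
    def can_handle(self, utterance):
        return "weather" in utterance.lower()

    def respond(self, utterance):
        return "It is sunny."


class FallbackSkill(Skill):
    """Last resort: this is where an LLM call could go instead."""

    def can_handle(self, utterance):
        return True  # always matches, so it must come last

    def respond(self, utterance):
        return "Sorry, I could not understand."


def handle(utterance, skills):
    # First skill that claims the utterance wins.
    for skill in skills:
        if skill.can_handle(utterance):
            return skill.respond(utterance)


skills = [WeatherSkill(), FallbackSkill()]
print(handle("what's the weather?", skills))  # It is sunny.
print(handle("tell me a joke", skills))       # Sorry, I could not understand.
```

Replacing `FallbackSkill.respond` with a call to a chat model would give the "discussion skill" behavior discussed in this thread without touching any other skill.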
Yeah, I know about that. The problem is I would need to obtain a developer key and then share it with everybody (since the app is FOSS), so people could easily steal it.
Maybe you can create a file with your secrets which you exclude from version control, or use GitHub's secrets. The F-Droid build would not use it, but you could supply it there or do a separate build.
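The pattern suggested here, keeping the key out of version control and letting each build supply its own, can be sketched like this. The environment-variable and file names are assumptions for illustration; an Android build would typically do the equivalent via a git-ignored properties file instead.

```python
# Sketch: load an API key from the environment, or from a local file
# that is listed in .gitignore, so the secret never lands in the repo.
# Variable and file names here are illustrative assumptions.
import os
from pathlib import Path


def load_api_key(env_var="CHATGPT_API_KEY", secrets_file="secrets.txt"):
    # 1. CI builds (e.g. GitHub Actions secrets) inject an env var.
    key = os.environ.get(env_var)
    if key:
        return key.strip()
    # 2. Local developer builds read a git-ignored file.
    path = Path(secrets_file)
    if path.exists():
        return path.read_text().strip()
    # 3. No key configured (the F-Droid case): disable the LLM feature.
    return None
```

A build without any key, such as F-Droid's, simply gets `None` and the LLM fallback stays off, which matches the "separate build" idea above.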
Otherwise, there is a free and open-source chatbot from which Dicio could draw inspiration to gain conversation skills.
We can let the user handle it and give them options: if they want the feature, they put in their own dev API key. That way you don't have to worry about it. So basically, if I want that feature, I'll add my API key and get it working.
I don't want to limit the F-Droid build. I'd rather do what Tcache-Nukes said.
Sounds good, I look forward to testing it if you're going to do that.
Otherwise, there is another option to counterbalance OpenAI's GPT that aims to be free and open source: it's called Open Assistant. It's currently under construction and needs data, but it could be a good idea for later.
What about allowing the user to set the ChatGPT API token, so everybody uses their own account info?
Hi,
I experimented with ChatGPT today and used a starting prompt that gets it into a mode I would find useful.
Of course, I came to Dicio for the open source, so using the ChatGPT API would go against that. Interesting nonetheless.
Yeah, I also thought it would work with those kinds of prompts.
Open Assistant does well too!
It's not perfect though; spot the error given the prompt "Wake me up at five fifteen tomorrow":
[screenshot of the response omitted]
With the developments that have happened in the last 3-4 weeks, I believe we can make use of privateGPT (which is based on LangChain), or directly use GPT4All, huggingface.co/chat, etc. This would greatly improve usage. Forgot to mention this: https://open-assistant.io
I would like that very much, but GPT is closed source and honestly I would prefer an open alternative. So thanks to @pixincreate; also, I have read in the FAQ that Open Assistant can run locally, so no problem for privacy.
It would be really useful to have an easy way to configure it to fall back to a ChatGPT-compatible LLM, specifically LocalAI. Multiple fallbacks would be even better. Still looking for a way to set everything up to use the llama-index vector stores.
Why not just have a setting for people to set an OpenAI-compatible endpoint with an API token? Or use something like LiteLLM. That would allow people to use their own self-hosted LLM servers or public ones.
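Supporting an arbitrary OpenAI-compatible endpoint, as suggested above, mostly comes down to two user settings: a base URL and a token. A minimal sketch of assembling such a request, using the standard `/v1/chat/completions` path that OpenAI-compatible servers (LocalAI, LiteLLM, llama.cpp's server, Ollama's compatibility mode) expose; no network call is made here, and the example URL, token, and model name are placeholders:

```python
# Build a chat-completion request for any OpenAI-compatible server.
# base_url, api_token, and model are user-supplied settings, not fixed values.

def build_chat_request(base_url, api_token, model, user_message):
    return {
        "url": base_url.rstrip("/") + "/v1/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        },
    }


# Placeholder values for a self-hosted server:
req = build_chat_request("http://localhost:8080", "sk-local", "some-model", "Hello")
print(req["url"])  # http://localhost:8080/v1/chat/completions
```

Because only the base URL changes, the same code path would cover a paid OpenAI account, a LiteLLM proxy, or a fully local server, which is what makes this setting attractive for a FOSS app.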
I've been experimenting with ollama running on my phone, although I'll be more likely to set up a local server. Would be cool anyway.
This app I built during an internship last year runs llama.cpp on-device, and a fine-tuned phi-2 model turned out to be enough to build an assistant. Unfortunately, it is slow. https://gitlab.e.foundation/e/os/open-source-llm-assistant
I also want to replace Google with something local and open source. I run a Home Assistant server, and I was going to use their home-llm plugin to get the functionality. That is great, but I realized that it would not be able to forward requests back to the phone, such as opening some app or setting a timer. At this point, Dicio appears to be the most promising solution, but I'd like to explore the possibility of having a compact LLM (Large Language Model) that can handle intents locally on-device. This would allow us to offload some work from individual skills and create more streamlined functionality without needing a separate skill for each minor task.
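One common way to let a compact on-device LLM handle intents, as proposed above, is to prompt the model to answer only with a small JSON object, then validate it and dispatch to the right action. The schema, intent names, and example reply below are all hypothetical, not anything Dicio or home-llm actually defines.

```python
import json

# Hypothetical set of intents; the LLM would be prompted with something like:
#   'Reply ONLY with JSON of the form {"intent": ..., "slots": {...}}'
KNOWN_INTENTS = {"set_alarm", "open_app", "set_timer"}


def parse_intent(llm_output: str):
    """Validate the model's JSON reply; return None if it is malformed
    or names an intent we do not know (small models often get this wrong)."""
    try:
        data = json.loads(llm_output)
    except json.JSONDecodeError:
        return None
    if data.get("intent") not in KNOWN_INTENTS:
        return None
    return data["intent"], data.get("slots", {})


# Hypothetical model reply for "Wake me up at five fifteen tomorrow":
reply = '{"intent": "set_alarm", "slots": {"time": "05:15", "day": "tomorrow"}}'
print(parse_intent(reply))  # ('set_alarm', {'time': '05:15', 'day': 'tomorrow'})
```

The strict validation step matters precisely because, as the earlier screenshot comment showed, small models can mis-parse times or emit malformed output; rejecting bad replies lets the app fall back gracefully.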
Can we just add a new skill like "ASK GPT {your question}"?
Can we use the ChatGPT API to make it smarter than things like Siri and Google? That would be so cool with OpenAI integration.