
Support for Ollama servers? #64

Open
a-singer opened this issue Feb 14, 2024 · 2 comments

@a-singer

Hi all,

I continue to be astonished at the speed and quality of development here; thanks to everyone involved for something which has made life better and easier. I thought I would ask, if I may, for a feature which might become more useful as time passes. Ollama (https://ollama.com/) is a tool for running LLMs locally, on one's own machine or intranet. There is already an accessible client for it, though at an early stage, at https://github.com/chigkim/VOLlama/.

Would it be possible for the add-on to support sending data to an Ollama server/model? It would be particularly nice to be able to send NVDA's objects to the model, of course, as well as to work with the local models through the central dialogue. Thanks for looking into whether this would be possible.
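
For reference, here is a rough sketch of what sending a prompt to a local Ollama server looks like over its REST API (assuming the default port 11434 and an illustrative model name; not a proposal for how the add-on should implement it):

```python
# Rough sketch: query a local Ollama server over its REST API.
# Assumes Ollama is running on the default port 11434; the model name
# "llama2" is only illustrative.
import json
import urllib.request

def ask_ollama(prompt, model="llama2", host="http://localhost:11434"):
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete response instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming reply is a JSON object whose "response" field
        # holds the generated text.
        return json.loads(resp.read())["response"]

print(ask_ollama("Describe this screen object: a button labelled 'OK'."))
```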

@aaclause
Owner

Hello,
Yes, it is in progress in #62. The next release should include this :)
Thanks

@a-singer
Author

a-singer commented Feb 14, 2024 via email
