Describe the bug
I have an Ubuntu server on which Ollama is installed. I've already downloaded the codellama models recommended in the quick start guide, exposed the Ollama server to the network, and adjusted the "providers" configuration to point to the server.
BUT, when I try using the extension to chat with the model, the server logs 404 errors. In the VSCode extension, I get this message:
To Reproduce
1. Install Ollama with curl -fsSL https://ollama.com/install.sh | sh, as described here.
2. Pull the models: ollama pull codellama:7b-instruct and ollama pull codellama:7b-code
3. Start the Ollama server on 0.0.0.0 to expose it to the network: OLLAMA_HOST=0.0.0.0:11433 ollama serve. Note that I'm using 11433 instead of 11434 because that port is already taken by another Ollama container. (A quick way to verify the server is reachable on this port is sketched after these steps.)
4. Adjust your providers like so (where the IP matches the server IP):
5. Try sending a message in the Twinny chat.
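For reference, this is roughly how I check that the server is reachable from the machine running VSCode. The IP below is a placeholder for my server's address; /api/tags and /api/chat are standard Ollama endpoints:

# List the models the server has pulled (replace the placeholder IP with your server's address)
curl http://192.168.1.100:11433/api/tags

# Send a minimal non-streaming chat request to the instruct model
curl http://192.168.1.100:11433/api/chat -d '{"model": "codellama:7b-instruct", "messages": [{"role": "user", "content": "hello"}], "stream": false}'

Both commands return JSON when the server is reachable; if /api/chat itself returns 404 here, the server may be running an older Ollama version that only exposes /api/generate.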
Expected behavior
For the Ollama server to respond and for the extension to display the output.
Screenshots
If applicable, add screenshots to help explain your problem.
Logging
Enable logging in the extension settings if not already enabled (you may need to restart VSCode if you don't see logs). Provide the log with the report.
API Provider
Ollama
Chat or Auto Complete?
Chat
Model Name
Using codellama:7b-code and codellama:7b-instruct
Desktop (please complete the following information):
Additional context
Full log of the Ollama server: