Pivy does request and complete but doesn't hint or fill #34
Comments
Is it true that larger code prompts might cause Privy to stop working?
From the attached image, it looks like the request was sent to the Ollama instance, but it never received a response. When this happens, I usually restart my Ollama instance and the problem gets resolved. As long as code prompts fit in the context window (16k for …
That's odd. When posting a small file like:
Works like a charm...
One way of checking whether Ollama is causing trouble is by copy-pasting the prompt from the VSCode logs and trying it on the Ollama CLI.
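That check might look something like the sketch below. The model tag is the one reported later in this issue, and the prompt placeholder stands in for the actual prompt copied from the logs; both are assumptions, not the reporter's exact commands.

```shell
# Save the prompt copied from the VSCode output logs into a file.
cat > prompt.txt <<'EOF'
<paste the prompt from the VSCode logs here>
EOF

# Replay it against the same model via the Ollama CLI, if installed.
if command -v ollama >/dev/null 2>&1; then
  ollama run deepseek-coder:1.3b-base "$(cat prompt.txt)"
fi
```

If the CLI streams a sensible completion here, Ollama itself is fine and the problem sits between the extension and the editor.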
Will do tomorrow; I'll let you know.
I created a text file: prompt.txt
Then I ran:
I got back a whole stream:
Apologies for the delayed response. I tested it with large files on my end too. The inline suggestions are not displayed when the inference times are longer (~5 seconds or more). My guess is that there is some sort of timeout logic on the VSCode editor end. In the output attached above, I see the total inference duration is around ~8 seconds; that's probably why you are not seeing the editor suggestions either. I haven't completely tracked this down yet due to time constraints. Once I make some progress, I will update here.
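One way to test that hypothesis is to time the raw HTTP request outside the editor. This is a sketch assuming a default local Ollama install on port 11434; `/api/generate` is Ollama's standard generate endpoint, and the model tag and prompt are placeholders, not the reporter's actual ones.

```shell
# Time the same kind of request the extension makes, without the editor
# in the loop. Skips cleanly when no local Ollama server is running.
if curl -sf http://localhost:11434/ >/dev/null 2>&1; then
  out=$(curl -s -w 'total: %{time_total}s' -o /dev/null \
    http://localhost:11434/api/generate \
    -d '{"model": "deepseek-coder:1.3b-base", "prompt": "def fib(n):", "stream": false}')
else
  out="skipped: no Ollama server reachable at localhost:11434"
fi
printf '%s\n' "$out"
```

If the reported total is consistently well above a few seconds, an editor-side timeout dropping the suggestion would fit the symptoms.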
Good debugging!
Quite possible. The key point, per my understanding, is that for the files you want to run Privy on, inference on your Ollama host shouldn't take more than a few seconds. If you can speed up your local Ollama instance, please give it a try.
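To see where the time goes on the Ollama side before trying to speed it up, the CLI's `--verbose` flag prints load and eval durations after each response. A sketch (skips when the CLI isn't installed; the prompt is just an example):

```shell
# `ollama run --verbose` appends timing stats (load duration, prompt
# eval, eval duration) after the reply, showing what needs speeding up.
if command -v ollama >/dev/null 2>&1; then
  out=$(ollama run deepseek-coder:1.3b-base --verbose "write fizzbuzz in python" 2>&1)
else
  out="skipped: ollama CLI not installed"
fi
printf '%s\n' "$out"
```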
Tell us what you need help with.
I'm running Privy locally on Docker and it works: sending prompts gives me results.
But in the inline editor, neither autocomplete nor manual mode gives hints (suggestions).
The model that I use:
deepseek-coder:1.3b-base
The output:
Any thoughts? What else can I provide you with?