I don't know how to use autocomplete or it is not working for me. #20
Comments
One of our users faced the same issue. It happened because the extension picked the wrong model variant for autocompletion. Please use
Thank you for your help. I just started it again with the mission to record a video for you and now, all of a sudden, it seems to be working. There is some gray text showing up that is not really an autocomplete suggestion, but it does look similar to the screenshot in your README.md. So, instead of getting autocomplete suggestions, I'm actually getting this:

I guess this is just the LLM being stupid, nothing to do with your extension. I believe my previous inability to make this work was due to me not fully understanding how to work with Ollama on Windows. It seems to me now that each and every time I type

By the way, my advice would be to add timestamps to your log lines. Right now I can't tell when a given log line was generated: was it produced just now, or am I looking at something from a few minutes back? I can't tell. If each log line started with a Unix timestamp, or a simple clock like XX:YY:ZZ, I would be able to follow the logs (rough sketch of what I mean at the end of this comment).

Since it is now kind of working, I will run it for a few days and see how it behaves. Will report back here. Thanks.
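P.S. By "timestamps" I mean something like the following. This is only a sketch, assuming Privy logs through a standard VS Code `OutputChannel` (I haven't read the source); the helper name is made up:

```typescript
import * as vscode from "vscode";

// Hypothetical helper, not Privy's actual code: wraps an OutputChannel so
// that every line is prefixed with a wall-clock timestamp (HH:MM:SS).
function createTimestampedLogger(channel: vscode.OutputChannel) {
  return (message: string): void => {
    const now = new Date().toTimeString().slice(0, 8); // "HH:MM:SS"
    channel.appendLine(`[${now}] ${message}`);
  };
}

const channel = vscode.window.createOutputChannel("Privy");
const log = createTimestampedLogger(channel);
log("requesting completion from Ollama...");
// The output panel then shows e.g. "[14:32:07] requesting completion from Ollama..."
```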
Thanks a lot for the detailed write-up. This helps us assist other users too!

As for autocomplete suggestions, please use only

Hope this helps!
Ah yes, thank you for the clarification. It is so easy to get lost in these Ollama models.

OK, so now I have the situation that I originally reported: everything is running, but Privy's autocomplete suggestions are either non-existent (while also blocking VSCodium's autocomplete box) or, very rarely, shown on screen but complete nonsense.

Let me explain where I am coming from: I'm a professional full-stack developer and I work on a project with ~30,000 source code files, some Perl, some SQL, some HTML, some JavaScript, and some CSS. For the last several years I've been using the Atom IDE with TabNineLocal, a local CPU-bound autocomplete AI engine which, honestly, felt like a miracle from day one. TabNine could easily guess my whole lines, so my coding would just be: type a few letters, hit TAB a few times, and the line is finished. TabNine could never write whole code snippets for me, that's true, but it supercharged autocomplete so much that my coding became 10x faster. I read people saying the same thing about GitHub Copilot, but I don't want to run anything that can't work locally on my machine. I was hoping Privy would be the same or better.

Unfortunately, right now, for reasons I am not entirely sure about, Privy is more or less not working at all for me. My VSCodium's built-in autocomplete works decently when Privy is disabled, but when Privy is enabled with these models it stops working. I guess these LLM models can't really digest huge Perl files, and also, I guess, Privy hangs while waiting for the LLM, thus blocking VSCodium's autocomplete suggestion box (a sketch of what I mean is at the end of this comment).

These are my early experiences with Privy + Ollama. Please let me know if I can provide any debug info. Thanks.
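P.S. To illustrate the "Privy hangs and blocks the suggestion box" hypothesis: this is roughly the shape of a non-blocking inline completion provider in the VS Code extension API. It is a sketch under my assumptions, not Privy's actual code; `queryLocalModel` is a made-up placeholder for the real Ollama call, and the 2-second budget is arbitrary:

```typescript
import * as vscode from "vscode";

// Hypothetical placeholder for the actual request to the local LLM.
declare function queryLocalModel(prefix: string): Promise<string | undefined>;

const provider: vscode.InlineCompletionItemProvider = {
  async provideInlineCompletionItems(document, position, _context, token) {
    // Send only a window of recent context, not the whole (huge) file.
    const start = new vscode.Position(Math.max(0, position.line - 20), 0);
    const prefix = document.getText(new vscode.Range(start, position));

    // Race the model against a deadline so a slow backend never stalls the editor.
    const completion = await Promise.race([
      queryLocalModel(prefix),
      new Promise<undefined>((resolve) => setTimeout(() => resolve(undefined), 2000)),
    ]);

    // Give up silently on timeout, or if the user has kept typing.
    if (!completion || token.isCancellationRequested) {
      return [];
    }
    return [new vscode.InlineCompletionItem(completion)];
  },
};

vscode.languages.registerInlineCompletionItemProvider({ pattern: "**" }, provider);
```

If the provider instead awaits the LLM with no deadline, the editor keeps waiting for its result, which would be consistent with what I'm seeing.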
This is a mistake on our side. We didn't add support for
There is a manual mode setting for Privy's autocomplete feature. You can use it in tandem with your VSCodium editor's autocomplete suggestions.

I do agree with the overall sentiment of your observation: if the tool doesn't help you become more productive, there is no point in using it. The space of coding LLMs is moving fast, and we're working on integrating the relevant pieces into Privy. Hopefully, in the coming days, the output will match your expectations 😃.
We ran benchmarks for the models we support. For autocompletion, please pick from the recommended models section in our README.md.

The benchmark image is generated using benchllama. You can use it for testing different models on your system and choose accordingly.
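For a quick check without installing anything, a rough timing comparison against Ollama's standard `/api/generate` endpoint looks something like this (a sketch, not benchllama itself; the model names are examples, substitute whatever you have pulled):

```typescript
// Sends one fixed prompt to each model via Ollama's HTTP API and prints the
// latency. Requires Node 18+ for the global fetch.
const models = ["deepseek-coder:1.3b-base", "codellama:7b"]; // examples only

async function timeModel(model: string): Promise<void> {
  const started = Date.now();
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    body: JSON.stringify({ model, prompt: "function add(a, b) {", stream: false }),
  });
  const body = (await res.json()) as { response?: string };
  console.log(`${model}: ${Date.now() - started} ms ->`, body.response?.slice(0, 60));
}

(async () => {
  for (const model of models) {
    await timeModel(model);
  }
})();
```

If a model takes several seconds per completion on your hardware, it will feel broken for autocomplete no matter how good its output is, so latency is usually the deciding factor.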
Hi,
Thank you for this great software.
Unfortunately, I can't make autocomplete work on my computer.
This is on Windows 10 Pro x64, VSCodium v1.85.1, Release 23348, Privy v0.2.7.
The code explanation panel is working fine; at least that one shows errors, so I could fix the problems and get it working.
The autocomplete function, on the other hand, I cannot make work at all, and I don't understand why it is not working.
As soon as I try to turn it on, my VSCodium's autocomplete stops working altogether.
Ollama is working fine, Privy looks like it is connecting fine, and the LLM model seems to be matched correctly. Privy's output panel in VSCodium shows it working, sifting through my source file lines, but autocomplete suggestions are nowhere to be found.
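For what it's worth, this is roughly how I convinced myself that Ollama itself is reachable (a sketch using Node 18+ and Ollama's standard `/api/tags` endpoint, which lists locally pulled models):

```typescript
// If this fails, or the model Privy expects is missing from the list,
// the problem is on the Ollama side rather than in Privy.
(async () => {
  const res = await fetch("http://localhost:11434/api/tags");
  const data = (await res.json()) as { models: { name: string }[] };
  console.log(data.models.map((m) => m.name));
})();
```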
Please let me know if I can provide some more debug info.
Thank you for your help.